CN111210819A - Information processing method and device and electronic equipment - Google Patents

Information processing method and device and electronic equipment

Info

Publication number
CN111210819A
CN111210819A (application CN201911423707.2A; granted as CN111210819B)
Authority
CN
China
Prior art keywords
audio data
information
data
instruction
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911423707.2A
Other languages
Chinese (zh)
Other versions
CN111210819B (en)
Inventor
Yang Weidong (杨卫东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201911423707.2A
Publication of CN111210819A
Application granted
Publication of CN111210819B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 - Execution procedure of a spoken command
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 - Home automation networks
    • H04L 12/2816 - Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/282 - Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home

Abstract

The application discloses an information processing method, an information processing apparatus, and an electronic device. In the method, while a first device plays first audio data obtained in a first manner, it obtains second audio data in a second manner; the first device processes the second audio data and, if the second audio data includes target audio data matching first reference data, executes a first instruction. In this way the first device can respond to and process other audio data even while it is playing audio, which reduces the impact of the device's own playback on the processing of other audio data and improves the user experience.

Description

Information processing method and device and electronic equipment
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an information processing method and apparatus, and an electronic device.
Background
With the development of electronic devices, more and more of them provide a voice interaction function. During voice interaction between a user and an electronic device, however, the audio played by the device itself, or ambient audio, can interfere, so that the user and the device cannot interact by voice at all, or the quality of the voice interaction experience is reduced.
Disclosure of Invention
In view of this, the present application provides the following technical solutions:
an information processing method comprising:
in the process of playing first audio data obtained in a first manner, a first device obtaining second audio data in a second manner, wherein the first manner is different from the second manner;
the first device processing the second audio data;
the first device executes a first instruction if the second audio data includes target audio data that matches the first reference data.
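The three claimed steps can be sketched as a minimal model; the class and method names below are illustrative, not taken from the patent:

```python
class FirstDevice:
    """Minimal model of the claimed method: the first device plays audio
    obtained in a first manner while separately obtaining and processing
    audio obtained in a second manner (e.g. a microphone)."""

    def __init__(self, first_reference_data):
        self.first_reference_data = first_reference_data
        self.now_playing = None
        self.executed_instructions = []

    def play(self, first_audio_data):
        # Step 1: first audio data, obtained in the first manner, is played.
        self.now_playing = first_audio_data

    def process_second_audio(self, second_audio_data):
        # Steps 2-3: process the second audio data; if it includes target
        # audio data matching the first reference data, execute a first
        # instruction. Playback is never interrupted by this processing.
        if self.first_reference_data in second_audio_data:
            self.executed_instructions.append(second_audio_data)
            return True
        return False
```

For instance, a device playing a soundtrack would still react to "start voice assistant, today's trip" without stopping playback.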
Optionally, the executing of the first instruction by the first device includes:
the first device sends a second instruction to a second device connected with the first device, so that the second device outputs a response result, wherein the response result matches the second audio data.
Optionally, the obtaining second audio data in a second manner includes: collecting second audio data of the environment in which the first device is located.
Optionally, the method further comprises:
the first device determining the second device from candidate electronic devices connected with the first device;
wherein the determining the second device from the candidate electronic devices connected to the first device comprises: selecting, from the candidate devices connected to the first device, a device that is not currently performing a response task matching the target audio data, and determining that device as the second device.
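This selection rule might be sketched as follows, assuming each candidate device can report whether it is currently busy with a response task (the dictionary fields are hypothetical):

```python
def select_second_device(candidate_devices):
    # Pick the first connected candidate that is not currently performing
    # a response task; return None if every candidate is busy.
    for device in candidate_devices:
        if not device.get("busy", False):
            return device
    return None
```

With candidates `[{"name": "tv", "busy": True}, {"name": "speaker", "busy": False}]`, the idle speaker would be chosen as the second device.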
Optionally, the method further comprises:
the first device determining the second device from candidate electronic devices connected with the first device;
wherein the second audio data comprises request data corresponding to the target audio data, and the determining the second device from the candidate electronic devices connected with the first device comprises:
determining a response mode based on the request data; determining, from the candidate devices connected to the first device, a device matching the response mode, and determining that device as the second device.
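One illustrative way to realize this, assuming a response mode can be inferred from keywords in the request data and that each candidate advertises the output modes it supports (both assumptions are ours, not the patent's):

```python
def select_by_response_mode(request_data, candidate_devices):
    # Derive a response mode from the request, then choose a candidate
    # device that supports that mode. The keyword list is illustrative.
    audio_hints = ("play", "music", "song")
    mode = "audio" if any(h in request_data for h in audio_hints) else "display"
    for device in candidate_devices:
        if mode in device["modes"]:
            return device
    return None
```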
Optionally, the obtaining second audio data in a second manner includes: receiving, from a connected second device, second audio data of the environment in which the second device is located;
the executing of the first instruction by the first device includes:
the first device outputting a response result, wherein the response result is information corresponding to the second audio data.
Optionally, the receiving, from a connected second device, second audio data of the environment in which the second device is located includes:
receiving second audio data broadcast by the second device, wherein the second device transmits the second audio data after determining that it does not include audio data matching second reference data.
Optionally, the first instruction comprises: a third instruction for adjusting a playing parameter of the first audio data played by the first device;
the executing of the first instruction by the first device includes:
the first device executing the third instruction and controlling the first audio data to be played according to the playing parameter corresponding to the third instruction.
An information processing apparatus comprising:
the data acquisition unit is used for acquiring second audio data in a second mode in the process of playing first audio data acquired in a first mode, wherein the first mode is different from the second mode;
a processing unit for processing the second audio data;
an execution unit configured to execute the first instruction if the second audio data includes target audio data that matches the first reference data.
An electronic device, comprising:
output means for playing first audio data, the first audio data being data obtained in a first manner;
obtaining means for obtaining second audio data in a second manner, wherein the first manner and the second manner are different;
processing means for processing the second audio data; if the second audio data includes target audio data that matches the first reference data, a first instruction is executed.
In summary, the application discloses an information processing method, an information processing apparatus, and an electronic device. While a first device plays first audio data obtained in a first manner, it obtains second audio data in a second, different manner; the first device processes the second audio data and, if it includes target audio data matching first reference data, executes a first instruction. Because the second audio data is received in a manner different from the first, the first device can respond to and process other audio data even while playing audio, which reduces the impact of the device's own playback on the processing of other audio data and improves the user experience.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a diagram of an application environment of an information processing method according to an embodiment of the present application;
Fig. 2 is a flow chart of an information processing method according to an embodiment of the present application;
Fig. 3 is a flow chart of an information processing method based on environment information according to an embodiment of the present application;
Fig. 4 is a signaling interaction diagram of an information processing method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a voice interaction scenario according to an embodiment of the present application;
Fig. 6 is a signaling interaction diagram of another information processing method according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 shows an application environment of an information processing method in one embodiment of the present application. Referring to Fig. 1, the information processing method is applied to a first device 100. The first device 100 has both an information transmission function and an information processing function; the information transmission function means that the first device has both input and output capabilities.
Specifically, the first device may be a smart speaker, a smart television, an intelligent terminal, a tablet computer, or a laptop computer. The first audio data is the data played by the first device. The second audio data may include audio data generated by a user of the first device, audio data of the environment in which the first device is located, or, correspondingly, audio data sent by other devices.
In one possible implementation, the input function of the first device 100 is embodied in its ability to obtain audio data in a first manner. Specifically, the first manner may be wired data acquisition, i.e., the first device 100 obtains audio data over a wired connection with the device that generates it. For example, if the first device is a television, it may have a wired video connection to a playback switching device such as a set-top box, which sends the audio or video data it generates, directly or indirectly, to the television. The first manner may also be information reading, i.e., the first device 100 reads audio and video data stored locally; information reading may also be implemented over a wireless connection, so that the first device obtains audio and video data from cloud storage via wireless data transmission.
In another possible implementation, the input function of the first device 100 is embodied in that the first device is capable of receiving information sent by other devices, which may be audio information, video information, or instruction information.
The output function of the first device 100 may be its output of the first audio data: if the first audio data comes from audio information alone, the output function is playback of that audio; if the first audio data comes from video data, the output function plays the video with audio and picture synchronized. Correspondingly, the output function of the first device 100 may also cover the output of information generated after processing the second audio data, such as response information or instruction information.
After obtaining the first audio data in the first manner, the first device 100 plays it. The first device 100 also obtains second audio data in a second manner; the second audio data may be captured by the first device 100 itself or received by it. The manner of obtaining the second audio data is described in detail in the following embodiments.
After the first device 100 obtains the second audio data, it processes it. The second audio data may include data directed at the first device as well as data relating to the environment in which the first device is located. The first device processes the second audio data to determine whether it includes target audio data matching the first reference data; if so, the first device executes the first instruction. The first reference data may be a keyword for the first device, or a keyword for another device, in which case the first device can be understood as a master device in the current network. For example, if the first device has a voice interaction function, the first reference data may represent a keyword capable of waking up the first device. The target audio data may thus be audio data that exactly matches the first reference data, or audio data that can characterize it. For example, if the first reference data is "please start the voice interaction function", the corresponding target audio data may take forms such as "start voice interaction" or "voice interaction start".
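For illustration only, the idea of audio that characterizes the reference data without matching it exactly could be approximated by token overlap between the recognized utterance and the reference keyword:

```python
def matches_reference(utterance, reference_data, threshold=0.5):
    # Treat the utterance as a match if it contains enough of the
    # reference keyword's content words; the filler-word list and the
    # threshold are illustrative choices, not values from the patent.
    fillers = {"please", "the", "a", "function"}
    core = set(reference_data.lower().split()) - fillers
    spoken = set(utterance.lower().split())
    return bool(core) and len(core & spoken) / len(core) >= threshold
```

With the reference "please start the voice interaction function", both "start voice interaction" and "voice interaction start" would count as target audio data.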
Correspondingly, the first instruction is an instruction generated in correspondence with the target audio data; it may directly invoke the function corresponding to the target audio data, or it may optimize or control how that function is realized. Details are given in the following embodiments.
Referring to fig. 2, a schematic flow chart of an information processing method provided in an embodiment of the present application is shown, where the method may include:
s201, in the process that the first device plays the first audio data obtained in the first mode, second audio data are obtained in the second mode.
The first device obtains the second audio data while playing the first audio data, but the first manner in which it obtains the first audio data differs from the second manner in which it obtains the second audio data. For example, when a multimedia application installed in the first device is opened, the first device may acquire the first audio data in the first manner, specifically according to an audio data acquisition path indicated in instruction information received by the multimedia application. If the instruction information indicates that the multimedia program should play live broadcast data, the live broadcast data of the live set-top box is acquired through a wired video interface, and that live data is the first audio data. If the instruction information indicates that the multimedia program should play stored audio, the first device obtains the audio through its local storage path, and that audio is the first audio data. After obtaining the first audio data, the first device plays it; the playing format matches the data format of the obtained first audio data.
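The dispatch on the acquisition path indicated by the instruction information could be sketched as follows; the source labels and reader stubs are hypothetical stand-ins for the wired video interface, local storage, and cloud storage mentioned above:

```python
def read_wired_video_interface():
    return "live broadcast data"           # stub for set-top box input

def read_local_storage(path):
    return f"stored audio from {path}"     # stub for a local file read

def fetch_cloud_storage(url):
    return f"cloud audio from {url}"       # stub for wireless retrieval

def acquire_first_audio(instruction_info):
    # Choose the first manner according to the acquisition path named in
    # the instruction information received by the multimedia application.
    source = instruction_info["source"]
    if source == "live":
        return read_wired_video_interface()
    if source == "local":
        return read_local_storage(instruction_info["path"])
    return fetch_cloud_storage(instruction_info["url"])
```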
In the process of playing the first audio data, the first device is not only in a playing state but can also obtain the second audio data, and the manner of obtaining the second audio data differs from the manner in which the currently playing first audio data was obtained. For example, the first device may collect second audio data of the current environment, or may receive second audio data sent by other devices. The second audio data may be audio information associated with the first device; for example, it may include a wake-up word for a voice interaction function of the first device.
S202, the first device processes the second audio data.
S203, if the second audio data includes target audio data matching the first reference data, the first device executes a first instruction.
The first device processes the second audio data to determine whether it contains the target audio data. The target audio data is data matching the first reference data, and the first reference data may be a keyword for waking up an application or function of the first device, or a keyword for another device in communication connection with the first device. Once the first device, in processing the second audio data, recognizes that it includes the target audio data, the first device executes the first instruction. The first instruction is an instruction matching the target audio data: if the target audio data is a wake-up word for a voice function of the first device, the first instruction starts the application corresponding to that voice function; if the target audio data includes not only a wake-up word for a voice function of the first device but also request information, the first instruction controls the first device to start the voice function and then output, through it, response information corresponding to the request information.
For example, suppose the wake-up word for the voice interaction application of the first device is "start voice assistant". When the second audio data obtained by the first device is "start voice assistant, today's trip", the first device starts its voice interaction application and outputs information matching the query, such as "attend the meeting at the meeting center at 2:30 pm".
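This wake-word-plus-request behavior can be sketched as follows; the `responses` lookup is a hypothetical stand-in for a real assistant backend:

```python
def handle_second_audio(second_audio, wake_word, responses):
    # No wake word: the second audio is ignored. Wake word alone: just
    # start the voice function. Wake word plus request: answer the request.
    if wake_word not in second_audio:
        return None
    request = second_audio.replace(wake_word, "").strip(" ,.")
    if not request:
        return "voice function started"
    return responses.get(request, "no information available")
```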
With the information processing method provided by this embodiment, the first device processes the obtained second audio data while playing the first audio data, and executes the first instruction if the second audio data includes target audio data matching the first reference data. The first device can thus respond to and process other audio data even while playing audio, which reduces the impact of the device's own playback on the processing of other audio data and improves the user experience.
The first device executes the first instruction if the second audio data includes target audio data matching the first reference data. It should be noted that, in this embodiment, the first instruction may represent an operation instruction for the first device itself, or an instruction the first device generates for transmission. In one possible implementation, the executing of the first instruction by the first device includes:
the first device sends a second instruction to a second device connected with the first device, so that the second device outputs a response result.
The second device is connected with the first device and has an information transmission function, and may also have an information processing function. For example, the second device may respond, in place of the first device, to the relevant audio data in the second audio data, or it may output information transmitted by the first device. In either case, the response result finally output by the second device matches the second audio data.
The following describes a process in which the first device sends the second instruction to the second device, so that the second device outputs a response result.
To give users in the environment of the first device a better experience, information about those users needs to be analyzed. Referring to fig. 3, an information processing method performed based on the environment of the first device according to an embodiment of the present application is shown, where the method includes:
s301, in the process that first audio data obtained in a first mode are played by first equipment, second audio data are obtained in a second mode;
s302, the first equipment processes second audio data;
s303, if the second audio data comprise target audio data matched with the first reference data, the first equipment acquires the environment information where the first equipment is located;
s304, the first device generates a second instruction based on the environment information;
s305, the first device sends a second instruction to a second device connected with the first device, so that the second device outputs a response result.
In this embodiment, the first instruction executed by the first device includes the second instruction. If the second audio data includes target audio data matching the first reference data, the first device needs to acquire environment information, which may represent image information of the scene in which the first device is located, infrared sensing information of that scene, and so on. It should be noted that the first device may capture an image of its environment using an image acquisition component provided in the device, such as a camera, and then derive the environment information from that image. Alternatively, if the first device has no image acquisition component of its own, it may send an environment-information acquisition instruction to a connected device with image capture capability, such as a monitoring device; the monitoring device captures the environment information of the first device and transmits it back, and the first device performs subsequent processing based on it.
In one possible implementation manner, the generating, by the first device, a second instruction based on the environment information includes:
the first device analyzes the environment information to determine whether the associated information of the first device's information receivers meets a first preset condition, and if so, generates the second instruction.
Here, an information receiver of the first device is a living being in the environment of the first device that can receive the first audio data output by the first device, or subsequent response information. The first preset condition may represent a condition on the number of information receivers, or distance information between the receivers and the first device. If it represents a number condition, the associated information the first device obtains is the number of its information receivers: when that number is greater than a preset threshold, the first device generates the second instruction. If it represents distance information, the associated information is the distance between each receiver and the first device, and analysis of the distances yields the receivers whose distance is less than or equal to a set distance threshold. By analyzing the environment information, the first device learns the real situation of the receivers actually attending to its output, and hence whether interrupting the current playback of the first audio data would degrade their experience. It then decides, based on this determination, whether to generate the second instruction.
If there are many information receivers, the second instruction can be generated and sent to the second device, so that the second device outputs the response result, reducing the impact on the receivers' experience.
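The delegation decision can be sketched as follows, assuming the environment analysis yields a distance (in meters) from the first device for each detected receiver; the thresholds are illustrative:

```python
def should_delegate_response(receiver_distances, max_viewers=1, attend_radius=3.0):
    # Receivers within the attention radius are assumed to be following the
    # first device's playback; if more than `max_viewers` are, the first
    # device generates the second instruction and delegates the response to
    # the second device instead of interrupting its own playback.
    attending = [d for d in receiver_distances if d <= attend_radius]
    return len(attending) > max_viewers
```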
For example, suppose the first device is a smart television; its information receivers are television viewers, and analysis of the television's environment information shows a large number of them. If the second audio data is recognized to include target audio data matching the first reference data, and the first reference data is a wake-up word for the television's voice interaction function, then the television needs to wake up that function and may subsequently interact with a user by voice; if it were to output voice information itself, the first audio data it is currently playing would be interrupted. Because there are many viewers, interrupting the output of the first audio data would harm the experience of the viewers who are not interacting. The smart television therefore generates a second instruction and sends it to a smart speaker, and the smart speaker outputs the response result matching the second audio data; for example, the smart speaker completes the voice interaction with the user after the television's voice interaction function is woken up. In this way the smart speaker takes over the voice interaction after the television is woken up, without interrupting the television's playback of the first audio data, so the interacting user obtains the information from the voice assistant while the other viewers continue to receive the first audio data played by the television.
In another possible implementation, before the first device sends the second instruction to the second device connected to it, the information processing method further includes:
the first device determines a response mode of the second device;
the first device sends a second instruction matching the response mode to the second device connected with the first device, so that the second device outputs a response result matching the response mode.
The first device needs to determine the response mode of the second device, i.e., the manner in which the second device responds to information. This includes a mode in which the second device only needs to output response information, and a mode in which the second device both generates and outputs the response information. In the output-only mode, the processor of the first device, or of a processing apparatus connected to it, generates the response information and sends it to the second device, which outputs it. In the generate-and-output mode, the first device sends the corresponding request information to the second device, which processes it, obtains the response information, and outputs it.
Specifically, after the function or application corresponding to the first reference data of the first device is woken up, the user's request information is received and the corresponding information processing is completed. The information may be processed by the first device, with the second device then outputting the response information corresponding to the processing result; or, when the second device has a processing function matching the first device's, the first device forwards the user's request information directly to the second device, whose processor processes the request and outputs the corresponding response information.
For example, after the voice interaction function of the first device is woken up, a user who needs to perform voice interaction with the first device may output corresponding interaction request information and expect to obtain corresponding feedback through the voice interaction function of the first device. After the first device receives the voice interaction request of the user, a voice interaction processing module of the first device generates response information corresponding to the request and sends it to the second device, so that the second device outputs the response information. Alternatively, the first device sends the received voice request information of the user to the second device, and the second device generates the response information corresponding to the voice interaction request and outputs it, so that the user obtains the response information.
Correspondingly, the determining, by the first device, the response mode of the second device includes:
the first device determines a response mode of the second device based on the second audio data.
After the first device identifies that the second audio data comprises the target audio data matched with the first reference data, the second audio data is analyzed to obtain specific content of the second audio data, and a response mode of the second device is determined according to the content of the second audio data.
If the second audio data only comprises the target audio data, determining that the second device is in a first response mode;
and if the second audio data comprises target audio data and request information matched with the target audio data, determining that the second device is in a second response mode.
The first response mode represents a mode in which the second device responds independently, that is, the second device processes the information and outputs the response information; the second response mode represents a mode in which the second device outputs the response result obtained by the first device.
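The response-mode decision described above can be sketched as follows. This is a minimal illustration only: the mode names, the plain-text wake-word matching, and the simple string handling stand in for the real audio recognition pipeline, which the specification does not detail.

```python
FIRST_RESPONSE_MODE = "independent"   # second device processes and outputs on its own
SECOND_RESPONSE_MODE = "output_only"  # second device only outputs the first device's result

def determine_response_mode(second_audio_text, wake_word):
    """Classify the second device's response mode from the recognized audio text."""
    if wake_word not in second_audio_text:
        return None  # no target audio data: nothing to respond to
    # Strip the wake word; whatever remains is the request information.
    request = second_audio_text.replace(wake_word, "").strip(" ,.?")
    if not request:
        # Only the wake word was heard: the second device responds independently.
        return FIRST_RESPONSE_MODE
    # Wake word plus request information: the first device computes the
    # response result and the second device merely outputs it.
    return SECOND_RESPONSE_MODE
```

For instance, the utterance "hello, television" alone would select the first response mode, while "hello, television, what is the weather" would select the second.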
Taking the first reference data as the voice assistant wake-up word of the first device as an example, if the second audio data only includes the wake-up word, the second device may respond to the wake-up word. For instance, if the first device is a television, the second device is a speaker, and the wake-up word is "hello, television", the speaker outputs "hello"; that is, the second device simply responds to the wake-up word alone. If the second audio data includes not only the wake-up word but also request information, the first device may process the request information to obtain the corresponding response information and then send it to the second device, which outputs it.
It should be noted that the second device outputs the response information after it receives the second instruction sent by the first device, that is, after the first device starts the function or application corresponding to the first reference data. As can be seen from the foregoing embodiments, the response information may be response information for the target audio data or for the second audio data. Taking the function corresponding to the first reference data as a voice interaction function: once the function is started, the second device may respond independently during the voice interaction with the user; that is, the second device receives the voice information of the user, processes it to obtain the response information, and outputs it. Alternatively, the second device may serve only as an input and output interface for information: the second device receives the voice information of the user and sends it to the first device; the first device processes the voice information to obtain the response information, which may be output by the first device itself, or sent to the second device for output, until the interaction with the user is completed.
In a possible implementation manner, if the second audio data includes target audio data and request information matched with the target audio data, determining that the second device is in the second response mode includes:
and analyzing the request information, and determining that the second device is in the second response mode if the analysis result meets a second preset condition.
The second preset condition may be a condition for judging the response result, or a condition determined according to the probability that the request information is an associated request. For example, if the request information can yield a unique output result, the first device may process the request information and send the response information to the second device for output. However, if the request information represents an associated request pattern, the request information may be processed and output by the second device, in order to better balance the processing resources of the first device.
For example, after the voice interaction function of the first device is started, the request information is "what time is it now?". This is a closed request, that is, only one response needs to be output; the first device sends the current time to the second device, and the second device outputs the response result, such as "17:30".
If the request information is identified as an associated request, such as "how do I download the update", a unique response result cannot be obtained, because it is not possible to confirm which application's update is to be downloaded, nor which version is required. The interaction is therefore more complicated, and the request information may be processed and output by the second device. The second device will output "which application do you want to update?", then "what is the model of your current mobile device?", and so on, until the interaction with the user is completed.
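The closed-versus-associated distinction above can be illustrated with a toy classifier. A real system would use intent recognition; the fixed set of closed intents here is purely an assumption for illustration.

```python
# Hypothetical set of requests known to have one unique answer.
CLOSED_INTENTS = {"what time is it now", "what is today's date"}

def classify_request(request_text):
    """Return 'closed' when a unique answer exists, else 'associated'."""
    if request_text in CLOSED_INTENTS:
        # Closed request: the first device answers, the second device outputs.
        return "closed"
    # Associated request: multi-turn clarification, handled by the second device.
    return "associated"
```

Under this sketch, "what time is it now" is closed, while "how do I download the update" requires follow-up questions and is associated.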
The first device sends the second instruction to the second device connected with it, so that the second device outputs the response result. In general, in an application scenario, the first device may be connected to a plurality of devices, and the second device that best matches the instruction of the first device needs to be determined. Correspondingly, in order to implement the process of sending an instruction from the first device to the second device, in an embodiment of the present application, the method further includes:
the first device determines a second device from the candidate electronic devices connected to the first device.
The first device determining a second device from candidate electronic devices connected with the first device, comprising:
the first device determines a second device among candidate electronic devices connected to the first device based on a processing mode matched with the first reference data.
The processing mode matched with the first reference data can represent a processing process mode of the audio data, an output response mode of the audio data and an execution state mode of the audio data.
In a possible implementation manner, the processing mode represents the processing mode of an application or function that has the same attribute as that corresponding to the first reference data. For example, the first reference data corresponds to the voice assistant wake-up word of the first device; when determining the second device, a device with the same voice assistant, for example a device of the same brand, needs to be selected. Specifically, if the voice assistant of the first device is A, the voice assistant of the device connected to the first device should also be A. This enables the devices to process the obtained voice information based on the same processing mode, so that the user's interaction with the first device can be transferred to the second device seamlessly. The devices may also be voice assistant devices with the same version information; the embodiments of the present application do not explain these cases one by one.
In another possible implementation manner, the processing mode characterizes an execution attribute of the response task, and then the first device selects a device that does not execute the response task from candidate devices connected to the first device, and determines the device as the second device.
Taking a smart home scene as an example, the first device may be a master control device in a network corresponding to the smart home, and may be connected to multiple devices, such as a smart television, a smart speaker, and a projector.
The response task is matched with the target audio data. For example, the target audio data is the wake-up word of the voice assistant of the first device, and the response task corresponding to the target audio data is an audio output task. Therefore, a device that is not performing an audio output task is selected from the candidate devices connected to the first device and determined as the second device. It should be noted that the second device may be performing other forms of information output when it is determined, as long as it is not performing the response task matching the target audio data. For example, suppose the devices connected to the first device include an electronic photo frame with an audio player, and the electronic photo frame is displaying its stored electronic photos in real time. If the device to be selected needs to perform audio playing and must not currently be playing audio, the electronic photo frame may be selected as the second device, because it is currently performing only image output, which does not affect the audio output task.
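The idle-device selection in the electronic photo frame example can be sketched as below. The dictionary schema (`has_audio`, `playing_audio`) is an assumption for illustration; the real device state would come from the smart-home network.

```python
def pick_idle_audio_device(candidates):
    """Select the first candidate that can play audio but is not currently
    executing an audio output task; return its name, or None if all are busy."""
    for dev in candidates:
        if dev["has_audio"] and not dev["playing_audio"]:
            # Other output forms (e.g. image display) do not disqualify it.
            return dev["name"]
    return None

devices = [
    {"name": "smart_tv",    "has_audio": True, "playing_audio": True},
    {"name": "photo_frame", "has_audio": True, "playing_audio": False},
]
```

Here the photo frame is chosen even though it is displaying photos, because it is not performing the audio response task.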
In yet another possible implementation, the processing mode characterizes a response mode determined for the request data. Determining the second device from the candidate electronic devices connected to the first device if the second audio data includes request data corresponding to the target audio data, including:
determining a response mode based on the request data; and determining a device matched with the response mode from the candidate devices connected with the first device, and determining the device as the second device.
In this embodiment, the second audio data includes, in addition to the target audio data, request data corresponding to the target audio data. The request data may be a request issued by the user after the function corresponding to the target audio data is executed. For example, if the voice interaction function of the first device is started and the response mode corresponding to the request data requires the response data to be output in audio form, a device with an audio playing function needs to be selected from the candidate devices. For another example, if the request data requires target information to be displayed, a device having a display screen, or one capable of generating a display picture by projection, needs to be selected from the candidate devices.
Of course, if the first device stores associated-device data for the applications that can be started or controlled by the different target audio data, the second device can be determined directly among the candidate devices through the associated-device data. For example, if it is recognized that the second audio data includes the start audio of a first application of the first device, the associated device specified in the configuration information of the first application may be obtained; if the associated device is device A, device A is determined as the second device. The associated device may represent a candidate device for the case where a certain function of the first device is occupied or a certain output module has failed.
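The capability-based selection with an associated-device shortcut might look like the following sketch. The capability names and the mapping layout are assumptions; the specification only requires that a device matching the required response mode (audio output, display, projection) be found.

```python
def select_second_device(candidates, required_capability, associated=None):
    """Choose the second device for a response mode derived from the request data.

    candidates: dict mapping device name -> set of capability strings.
    associated: optional device name stored in the first device's configuration;
                if present among the candidates it is chosen directly.
    """
    if associated and associated in candidates:
        return associated  # associated-device data short-circuits the search
    for name, caps in candidates.items():
        if required_capability in caps:
            return name
    return None

candidates = {
    "smart_speaker": {"audio"},
    "projector": {"display", "audio"},
}
```

With no associated device stored, a request needing visual output would select the projector; a stored association overrides the capability search.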
The information processing method of the present application is described below with the first device acquiring the second audio data. Referring to fig. 4, a signaling interaction diagram of an information processing method according to an embodiment of the present application is shown. The application scenario corresponding to fig. 4 includes a first device and a second device, where the second device is a device that performs communication connection with the first device. The first device and the second device may access to the same WIFI (Wireless Fidelity), that is, the same lan, so as to implement communication connection and further data interaction between the first device and the second device. Correspondingly, the first device and the second device may also be connected in other communication manners, and the embodiments of the present application are not described one by one.
S401, in the process that the first device plays the first audio data, collecting second audio data of the environment where the first device is located;
S402, the first device processes the second audio data;
S403, if the second audio data comprises target audio data matched with the first reference data, the first device generates a second instruction;
S404, the first device sends the second instruction to the second device;
S405, the second device executes the second instruction and outputs a response result.
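The signaling flow above can be condensed into a small sketch, assuming a trivially simple instruction format and a stub second device; both are illustrative stand-ins for the real inter-device protocol.

```python
class SecondDevice:
    def execute(self, instruction):
        # S405: execute the second instruction and output the response result.
        return f"response to {instruction['request']}"

def first_device_flow(captured_audio, wake_word, second_device):
    """Sketch of S401-S404: capture, process, generate and send the instruction."""
    # S402: process the second audio data captured in S401.
    if wake_word not in captured_audio:
        return None  # no target audio data; nothing to do
    # S403: target audio data found, generate the second instruction.
    request = captured_audio.replace(wake_word, "").strip(" ,.?")
    instruction = {"request": request or wake_word}
    # S404: send the instruction to the connected second device.
    return second_device.execute(instruction)
```

Running the flow on an utterance that contains the wake word hands the remaining request text to the second device for output.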
The first audio data is obtained in a first manner, and a specific form of the first manner can be referred to the description of the corresponding embodiment in fig. 2, which is not described herein again.
In the embodiment of fig. 4, the capturing of the second audio data of the environment of the first device is performed by the first device; it may also be performed by the second device, but it must then be ensured that the second device is in the same environment as the first device. The second audio data is the audio data of the environment where the first device is located, and since the first device is playing the first audio data, the second audio data includes the first audio data as well as the environmental audio other than the first audio data. The environmental audio may include audio data output by the user for the first device, audio data output by another device, or audio data output by the user for another device.
In this embodiment, the first device processes the second audio data to identify whether the second audio data includes target audio data that matches the first reference data; for example, the first reference data characterizes a wake-up word of a target function of the first device. When the first device processes the second audio data, it needs to identify whether the second audio data includes the wake-up word. If so, the first device generates a second instruction and sends it to the second device, so that the second device outputs a response result according to the second instruction, the response result matching the second audio data. Corresponding to the above example, that is, when the second audio data includes the wake-up word, the second instruction may instruct the second device to output, instead of the first device, the response result of the target function corresponding to the wake-up word. In a possible implementation manner, the second device may output the response result to the first device, which then outputs it, or the first device may designate a corresponding target device to output the response result to the receiving party.
Referring to fig. 5, a schematic diagram of a voice interaction scene provided by an embodiment of the present application is shown. In fig. 5, the first device is a smart tv 501, and the second device is a smart speaker 502. The smart television 501 has an application program corresponding to the voice assistant; that is, it can implement a voice interaction function with the user through the voice assistant. The wake-up word of the voice assistant, i.e. the first reference data, is "hello, tv".
In the scenario shown in fig. 5, the smart tv 501 is playing a target video segment; that is, the smart tv 501 plays the picture information of the target video segment through its display screen and the audio information through its speaker. At this time, the smart television 501 collects audio data of its environment, which includes the audio information it is playing and the audio information output by the user 503. The smart tv 501 then needs to determine whether the collected environmental audio data includes the first reference data corresponding to starting its voice assistant. Suppose the audio information output by the user 503 is "hello, tv. Please tell me today's weather". The smart television 501 recognizes that the audio information output by the user includes "hello, tv" and starts the application program corresponding to its voice assistant. Since the smart tv 501 is outputting the target video segment, in order to enable the user to obtain clearer response information, an instruction instructing the smart speaker 502 to output the corresponding response result may be generated, so that the smart speaker 502 outputs the response information "today is sunny, the highest temperature is 15 ℃, and the lowest temperature is 2 ℃" corresponding to the request information "today's weather" sent by the user 503.
It should be noted that, in the embodiment corresponding to fig. 5, the smart television may process the request information of the user through its voice assistant function, generate a response result, and output the response result to the smart speaker, so that the smart speaker outputs it. At this moment, the smart speaker is equivalent to an executor of information output for the voice assistant function of the smart television. In another possible implementation manner, the smart speaker also has the function of performing voice interaction with the user. After the smart television recognizes the wake-up word of the voice assistant, it generates a control instruction for starting the voice interaction function of the smart speaker and sends the instruction to the smart speaker, so that the smart speaker starts its own voice interaction function and completely replaces the smart television in the voice interaction with the user; that is, the smart speaker receives the voice information of the user and outputs a response result matching the voice information.
Referring to fig. 6, a signaling interaction diagram of another information processing method provided in an embodiment of the present application is shown. The application scenario corresponding to fig. 6 includes a first device and a second device, where the second device is a device that performs communication connection with the first device. The interaction process of the first device and the second device is as follows:
S601, the first device plays first audio data;
S602, the second device collects second audio data of the environment where the second device is located;
S603, the second device sends the second audio data to the first device;
S604, the first device receives and processes the second audio data;
S605, if the second audio data comprises the target audio data matched with the first reference data, the first device outputs a response result.
It can be seen that in this embodiment, the second device connected to the first device performs the acquisition of the second audio data, and the first device serves as the receiving end of the second audio data. The second audio data is audio data of the environment where the second device is located; the second device may or may not be in the same environment as the first device. After collecting the second audio data of its environment, the second device may be unable to identify it because some of its functions are limited, so the second device sends the second audio data to the first device for processing. For example, the first device may identify whether the second audio data includes target audio data matching the first reference data. In this embodiment, the first reference data may be for the first device or for the second device.
After the first device processes the second audio data and finds that it includes the target audio data, the first device executes the first instruction to output a response result. The response result is information corresponding to the second audio data. Since the second audio data includes the target audio data, the response result may correspond to the target audio data. For example, if the target audio data corresponds to a certain function of the second device, the response result may be control information for controlling the second device to turn on that function. If the second audio data further comprises request information, the request information may characterize an identification request, i.e. the second device requests the first device to identify whether the second audio data includes the target audio data; the output response result may then be the corresponding identification result, which may accordingly be sent to the second device. For another example, the request information may be request information for interacting with a specific application of the first device, in which case the first device may output response information matching the request information.
For example, the first device is a smart television, and the second device is a smart speaker. While the smart television is playing video, it needs to respond to the video playing task, so the smart speaker may collect the audio data of the environment where it is located, making the tasks executed by the devices in the current environment more balanced. After collecting the audio data, the smart speaker may send it to the smart television, which processes it. If the audio data includes target audio data, the smart television may output a corresponding response result, such as feedback based on the information collected by the smart speaker. Even if the target audio data is specific to the smart speaker, the response is output by the smart television, so that the collection task and the playing task do not affect each other.
In some embodiments, receiving the audio data of the environment in which the second device is located, captured by the connected second device, comprises:
and receiving second audio data broadcast by the second device, wherein the second audio data is transmitted by the second device after the second device determines that the second audio data does not include audio data matched with the second reference data.
Wherein the second reference data characterizes data for the second device;
or the second audio data is transmitted after the second device judges that the second audio data does not include audio data matched with the second reference data but does include audio data matched with third reference data.
In the case where the second audio data is transmitted by the second device after determining that it does not include audio data matched with the second reference data, the second reference data represents data for the second device. After obtaining the second audio data, the second device identifies it. If the second audio data does not include the second reference data, which may be data for starting the second device or data for a specific function or application of the second device, the second device broadcasts the second audio data to the other electronic devices connected to it, so that the other electronic devices can process or respond to it and the second audio data is not ignored.
Taking the trigger application corresponding to the second reference data as a voice interaction application: in the embodiment corresponding to this scenario, the second device may include a voice interaction application, and the wake-up word corresponding to this application is the second reference data. After the second device acquires the second audio data of its environment and judges that it does not include audio data matched with the second reference data (i.e., the second audio data includes the first audio data), the second device broadcasts the second audio data to the devices connected to it. The second audio data may then be identified by, for example, the first device, which continues to check whether it contains the target data corresponding to the first reference data. That is, after the second device collects the second audio data, since the audio data corresponding to the second reference data is not recognized, its voice interaction application is not woken up; the second device broadcasts the second audio data to the other devices, which identify whether a wake-up word corresponding to their own voice interaction functions exists. This avoids the problem that the voice interaction request cannot be responded to in time, does not occupy excessive processing resources of the device executing the playing task, and prevents the tasks of collecting data and playing data from affecting each other.
In the case where the second audio data is transmitted after the second device judges that it does not include audio data matched with the second reference data but does include audio data matched with the third reference data: the second device analyzes the second audio data after obtaining it; having found that it does not include the second reference data of the second device, the second device continues the analysis and judges whether it includes third reference data. The third reference data may represent common reference data of a certain device group, or reference data, stored by the second device, of the devices associated with the third reference data. When broadcasting the second audio data, the second device broadcasts it to the electronic devices matched with the third reference data.
For example, the reference data characterizes the wake-up word of a device's voice assistant. When the second device analyzes the second audio data, it finds that the data does not include the wake-up word of the second device, but further analysis shows that it includes the wake-up word of a specific voice assistant, such as the wake-up word corresponding to the same type of voice assistant. Assuming that the electronic device A, the electronic device B, and the electronic device C connected to the second device each include this specific voice assistant, the second device broadcasts the second audio data to the electronic device A, the electronic device B, and the electronic device C.
In another possible implementation manner, the second device may store the wake-up word of each device connected to it, or the wake-up word of the target function corresponding to each device. The second device can then be understood as the master device in the current local area network. After identifying that the second audio data does not include audio data matched with the second reference data, a preliminary judgment can be performed, that is, judging which devices' wake-up words the second audio data may include; the second audio data is then broadcast to the corresponding devices, and the receiving devices accurately identify it. For example, the first device receives the second audio data and then identifies whether it includes target audio data that matches its corresponding first reference data.
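The broadcast fallback with preliminary wake-word routing might be sketched as follows. The wake-word table and the substring matching are illustrative assumptions; the specification leaves the recognition mechanism open.

```python
def route_second_audio(audio_text, own_wake_word, known_wake_words):
    """Route captured audio: handle locally if the second device's own wake word
    is present; otherwise broadcast, preferring devices whose stored wake word
    appears in the audio. known_wake_words maps device name -> wake word."""
    if own_wake_word in audio_text:
        return ("handle_locally", [])
    # Preliminary judgment: which devices' wake words might the audio contain?
    targets = [dev for dev, word in known_wake_words.items() if word in audio_text]
    if targets:
        return ("broadcast", targets)
    # No stored wake word matched: broadcast to every connected device so the
    # request is not ignored.
    return ("broadcast", list(known_wake_words))
```

A master-device speaker that hears "hello, television, …" but not its own wake word would thus broadcast only to the television for accurate identification.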
In another embodiment of the present application, the first instructions include: and a third instruction for adjusting the parameter of the first audio data played by the first device.
Wherein the first device executes a first instruction comprising:
and the first equipment executes the third instruction and controls the first audio data to be played by the playing parameter corresponding to the third instruction.
In this embodiment, when the first device recognizes that the second audio data includes target audio data matching the first reference data, the first device or another device connected to it needs to output a response result for the function corresponding to the target audio data. In order to prevent the first audio data being played by the first device from affecting the response environment, the first device may, while processing the second audio data, generate a third instruction for controlling the playing parameters of the first audio data, so as to change its playing mode. The first device may also receive a third instruction generated when a device connected to it outputs a response result, and execute it to control the first audio data to be played with the playing parameters corresponding to the third instruction. The third instruction may be an instruction to lower the audio playing volume, a pause instruction, or another instruction matching the current playing mode of the first audio data.
Specifically, the first device processes the second audio data to generate a third instruction. At this time, the second audio data may be second audio data of an environment in which the second device is located, which is captured by the second device. The second audio data is sent to the first device by the second device, when the first device identifies whether the second audio data includes target audio data matched with the first reference data, the second audio data can be compared with the voiceprint information of the played first audio data, and if the voiceprint information included in the second audio data is matched with the voiceprint information of the first audio data, a third instruction can be generated, wherein the third instruction is used for instructing to reduce the playing volume of the first audio data.
For example, in the process of playing audio, the first device receives audio information including a wakeup word of the voice interaction function of the first device, which is sent by the second device, and wakes up the voice interaction function of the first device when the first device processes the audio information and recognizes the wakeup word. And comparing the audio information with the voiceprint characteristics of the audio played by the first device to obtain that the audio information comprises the audio played by the first device, and then receiving the voice information of the user through the voice interaction function of the first device and outputting a response result matched with the voice information of the user. Further, after the first device completes the voice interaction with the user, it may generate a volume recovery instruction, that is, recover the volume of the audio data that the first device has played.
There is also provided in an embodiment of the present application an electronic device, see fig. 7, including:
an output device 701 for playing first audio data, the first audio data being obtained in a first manner;
an obtaining apparatus 702 for obtaining second audio data in a second manner, wherein the first manner and the second manner are different; and
a processing device 703 for processing the second audio data and, if the second audio data includes target audio data matching the first reference data, executing a first instruction.
The component structure of the output device 701 in the electronic device matches the format of the first audio data to be played: if the first audio data is video data, the output device 701 includes a display component and an audio playing component; if the first audio data contains only audio, the output device 701 may include only an audio playing component.
The obtaining apparatus 702 in the electronic device may include a communication component or an acquisition component. When the obtaining apparatus 702 includes a communication component, the electronic device may establish a communication connection with another device through that component, receive second audio data sent by the other device, and also use it to transmit related instructions or other information. When the obtaining apparatus 702 includes an acquisition component, the electronic device can collect second audio data of its own environment through that component.
The processing device 703 may include one or more processing cores, for example a 4-core or 8-core processor. It may also be implemented in at least one hardware form among a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processing device may further include a main processor and a coprocessor: the main processor, also referred to as a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor may be integrated with an audio processor that processes audio to be recognized. In some embodiments, the processor may also include an AI (Artificial Intelligence) processor configured to handle computational operations related to machine learning; for example, in the embodiments of the present application, an AI processor may be used to identify voiceprint characteristics of audio data.
Depending on the specific implementation, the electronic device in the embodiments of the present application may further include an audio data component, a display component, a sound receiving component, a connection component, various auxiliary circuits, a data bus, and other structures.
In some embodiments, when the processing apparatus 703 in the electronic device executes the first instruction, it may send a second instruction to a second device connected to the electronic device, so that the second device outputs a response result, where the response result matches the second audio data.
On the basis of the above embodiment, the obtaining apparatus 702 includes an acquisition component;
the acquisition component acquires second audio data of the environment where the electronic equipment is located.
Correspondingly, the processing device 703 is further configured to determine a second device from the candidate electronic devices connected to the electronic device.
Specifically, the determining the second device from the candidate electronic devices connected to the electronic device includes:
selecting a device that does not perform a response task from candidate devices connected with the electronic device, the response task matching the target audio data, and determining the device as the second device.
Or, when the second audio data includes request data corresponding to the target audio data, determining the second device from candidate electronic devices connected to the electronic device, including:
determining a response mode based on the request data; and determining a device matched with the response mode from candidate devices connected with the electronic device, and determining the device as the second device.
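The two selection strategies above (pick a candidate device that is not performing a response task, or pick by a response mode derived from the request data) can be sketched as follows. The `Device` fields and the keyword-to-mode mapping are illustrative assumptions, not the patent's method.

```python
# Hypothetical sketch of the two second-device selection strategies.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    busy: bool = False                       # already performing a response task?
    modes: set = field(default_factory=set)  # response modes it supports

def pick_idle_device(candidates):
    """Strategy 1: choose a device not currently performing a response task."""
    for dev in candidates:
        if not dev.busy:
            return dev
    return None

def pick_by_response_mode(candidates, request_data):
    """Strategy 2: derive a response mode from the request data, then
    choose a device that supports that mode."""
    # Assumed mapping: a visual request keyword selects the display mode.
    mode = "display" if "show" in request_data else "audio"
    for dev in candidates:
        if mode in dev.modes:
            return dev
    return None
```

For instance, a request containing "show" would be routed to a candidate with a display, while a purely spoken request could go to any idle audio-capable candidate.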
In some embodiments, the obtaining means 702 includes a communication component, through which the electronic device is connected with the second device and receives second audio data of an environment in which the second device is located, the second audio data being collected by the connected second device;
correspondingly, the processing device 703 is specifically configured to:
output a response result, where the response result is information corresponding to the second audio data.
On the basis of the foregoing embodiment, the communication component is further configured to receive second audio data broadcast by the second device, where the second audio data is transmitted by the second device after the second device determines that the second audio data does not include audio data matching second reference data.
Optionally, the first instruction executed by the processing device 703 includes: a third instruction for adjusting the parameter of the first audio data played by the electronic equipment;
correspondingly, the processing device 703 is specifically configured to:
execute the third instruction and control the first audio data to be played according to the playing parameters corresponding to the third instruction.
For the functions and execution processes of the components of the electronic device, refer to the information processing methods described above and the steps associated with them; they are not described again here.
There is also provided in an embodiment of the present application an information processing apparatus applied to a first device, referring to fig. 8, the information processing apparatus including:
a data acquisition unit 801 configured to acquire second audio data in a second manner during playing of first audio data acquired in a first manner, where the first manner is different from the second manner;
a processing unit 802 for processing the second audio data;
an execution unit 803, configured to execute the first instruction if the second audio data includes target audio data matching the first reference data.
Further, the execution unit is specifically configured to:
send a second instruction to a second device connected with the first device, so that the second device outputs a response result, where the response result matches the second audio data.
Optionally, the data acquiring unit includes a collecting subunit, and the collecting subunit is configured to collect second audio data of an environment where the first device is located.
Optionally, the apparatus further comprises:
a device determination unit configured to determine the second device from among candidate electronic devices connected to the first device;
wherein the device determination unit is specifically configured to: selecting a device that does not perform a response task from candidate devices connected to the first device, the response task matching the target audio data, and determining the device as the second device.
When the second audio data includes request data corresponding to the target audio data, the device determining unit is specifically configured to:
determining a response mode based on the request data; determining a device matching the response pattern from among candidate devices connected to the first device, and determining the device as the second device.
Optionally, the data obtaining unit includes a receiving subunit, where the receiving subunit is configured to receive second audio data of an environment where a second device is located, where the second audio data is acquired by a connected second device;
the execution unit is specifically configured to:
output a response result, where the response result is information corresponding to the second audio data.
On the basis of the foregoing embodiment, the receiving subunit is specifically configured to:
receive second audio data broadcast by the second device, where the second audio data is transmitted by the second device after the second device determines that the second audio data does not include audio data matching second reference data.
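The broadcast condition above, where the second device forwards the captured audio only when it finds no match against its own (second) reference data, can be sketched as follows; all names and the substring-based match are illustrative assumptions.

```python
# Hypothetical sketch of the second device's broadcast step: check the captured
# audio against the local (second) reference data first, and only broadcast it
# to connected devices when no local match is found.

def second_device_step(captured_audio: str, second_reference: str, peers):
    """Returns the names of the peers the audio was broadcast to
    (an empty list means the second device handled it locally)."""
    if second_reference.lower() in captured_audio.lower():
        # Local match: the second device responds itself; nothing is broadcast.
        return []
    # No local match: broadcast the second audio data to connected devices,
    # e.g. the first device, which matches it against its first reference data.
    notified = []
    for peer in peers:
        peer.setdefault("inbox", []).append(captured_audio)
        notified.append(peer["name"])
    return notified
```

In a real deployment the reference check would be a wake-word model and the "inbox" a network send, but the decision point, match locally or broadcast, mirrors the step described above.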
Optionally, the first instruction comprises: a third instruction for adjusting the parameter of the first audio data played by the first device;
the execution unit is specifically configured to:
execute the third instruction and control the first audio data to be played according to the playing parameters corresponding to the third instruction.
An embodiment of the present application provides a storage medium having a program stored thereon, which when executed by a processor implements the information processing method.
The embodiment of the application provides a processor, wherein the processor is used for running a program, and the information processing method is executed when the program runs.
The present application also provides a computer program product which, when executed on a data processing device, is adapted to carry out a program initializing the steps of the information processing method described above.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be implemented by program instructions controlling related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes: a removable storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
Alternatively, the integrated units described above in the present application, if implemented in the form of software functional modules and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk, an optical disk, or any other medium that can store program code.
Each embodiment in this specification is described with emphasis on its differences from the other embodiments; for identical or similar parts, the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is kept brief, and the method section may be consulted for the relevant details.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method comprising:
the method comprises the steps that in the process of playing first audio data obtained in a first mode, first equipment obtains second audio data in a second mode, wherein the first mode is different from the second mode;
the first device processing the second audio data;
the first device executes a first instruction if the second audio data includes target audio data that matches the first reference data.
2. The method of claim 1, wherein the first device executing the first instruction comprises:
the first device sending a second instruction to a second device connected with the first device, so that the second device outputs a response result, wherein the response result matches the second audio data.
3. The method of claim 2, wherein the obtaining second audio data in a second manner comprises: collecting second audio data of the environment in which the first device is located.
4. The method of claim 2, further comprising:
the first device determining the second device from candidate electronic devices connected with the first device;
wherein the determining the second device from the candidate electronic devices connected to the first device comprises: selecting a device that does not perform a response task from candidate devices connected to the first device, the response task matching the target audio data, and determining the device as the second device.
5. The method of claim 2, further comprising:
the first device determining the second device from candidate electronic devices connected with the first device;
wherein the second audio data comprises request data corresponding to the target audio data, and the determining the second device from the candidate electronic devices connected with the first device comprises:
determining a response mode based on the request data; determining a device matching the response pattern from among candidate devices connected to the first device, and determining the device as the second device.
6. The method of claim 1, wherein the obtaining second audio data in a second manner comprises: receiving second audio data, collected by a connected second device, of the environment in which the second device is located;
wherein the first device executing the first instruction comprises:
the first device outputting a response result, wherein the response result is information corresponding to the second audio data.
7. The method of claim 6, wherein the receiving second audio data, collected by a connected second device, of the environment in which the second device is located comprises:
receiving second audio data broadcast by the second device, wherein the second audio data is transmitted by the second device after the second device determines that the second audio data does not comprise audio data matching second reference data.
8. The method of claim 1, wherein the first instruction comprises: a third instruction for adjusting a playing parameter of the first audio data played by the first device;
wherein the first device executing the first instruction comprises:
the first device executing the third instruction and controlling the first audio data to be played according to the playing parameter corresponding to the third instruction.
9. An information processing apparatus comprising:
the data acquisition unit is used for acquiring second audio data in a second mode in the process of playing first audio data acquired in a first mode, wherein the first mode is different from the second mode;
a processing unit for processing the second audio data;
an execution unit configured to execute the first instruction if the second audio data includes target audio data that matches the first reference data.
10. An electronic device, comprising:
output means for playing first audio data, the first audio data being data obtained in a first manner;
obtaining means for obtaining second audio data in a second manner, wherein the first manner and the second manner are different;
processing means for processing the second audio data; if the second audio data includes target audio data that matches the first reference data, a first instruction is executed.
CN201911423707.2A 2019-12-31 2019-12-31 Information processing method and device and electronic equipment Active CN111210819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911423707.2A CN111210819B (en) 2019-12-31 2019-12-31 Information processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111210819A true CN111210819A (en) 2020-05-29
CN111210819B CN111210819B (en) 2023-11-21

Family

ID=70784986


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205349A1 (en) * 2015-01-12 2016-07-14 Compal Electronics, Inc. Timestamp-based audio and video processing method and system thereof
CN107734370A (en) * 2017-10-18 2018-02-23 北京地平线机器人技术研发有限公司 Information interacting method, information interactive device and electronic equipment
CN108156497A (en) * 2018-01-02 2018-06-12 联想(北京)有限公司 A kind of control method, control device and control system
CN108899024A (en) * 2018-06-29 2018-11-27 联想(北京)有限公司 A kind of audio-frequency processing method, electronic equipment and server
CN110459221A (en) * 2019-08-27 2019-11-15 苏州思必驰信息科技有限公司 The method and apparatus of more equipment collaboration interactive voices
CN110534108A (en) * 2019-09-25 2019-12-03 北京猎户星空科技有限公司 A kind of voice interactive method and device




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant