CN112751971A - Voice playing method and device and electronic equipment - Google Patents

Voice playing method and device and electronic equipment

Info

Publication number
CN112751971A
CN112751971A (application CN202011605567.3A)
Authority
CN
China
Prior art keywords
application
voice data
data information
target
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011605567.3A
Other languages
Chinese (zh)
Inventor
张恒新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202011605567.3A
Publication of CN112751971A
Legal status: Pending

Abstract

The application discloses a voice playing method and device and electronic equipment, belonging to the field of communications technology. The method comprises the following steps: acquiring voice data information of a target application under the condition that at least two applications are in a call state; if the number of target applications is one, playing the voice data information of the target application through a voice channel; and if the number of target applications is at least two, playing the voice data information of the first application with the highest priority through the voice channel according to a preset playing priority, and storing the voice data information of the target applications other than the first application. In a scenario of concurrent real-time communication, one or more voice data streams can thus be handled while voice call quality is guaranteed, achieving the effect of processing multiple calls while maintaining call quality.

Description

Voice playing method and device and electronic equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a voice playing method and apparatus, and an electronic device.
Background
With the continuous development of mobile communication technology, intelligent electronic devices (such as mobile phones) have become widespread and offer ever more functionality. As applications with various kinds of voice interaction are installed on electronic devices, the options for voice communication keep growing, and it is inevitable that two or more applications will be in a call at the same time, i.e. that real-time communications will be concurrent. In the prior art, either such concurrency cannot be handled at all, so that only one call can be maintained and the other calls are disconnected; or, even if concurrent real-time communication can be processed, the voices are superimposed and become difficult to hear.
Disclosure of Invention
The embodiments of the application aim to provide a voice playing method, a voice playing device and an electronic device, which can solve the problem that, in current scenarios of concurrent real-time communication, handling multiple calls and maintaining call quality cannot both be achieved.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a voice playing method, where the method includes:
acquiring voice data information of a target application under the condition that at least two applications are in a call state;
if the number of the target applications is one, playing voice data information of the target applications through a voice channel;
or if the number of the target applications is at least two, playing the voice data information of the first application with the highest priority through a voice channel according to a preset playing priority, and storing the voice data information of other applications except the first application in the target applications.
In a second aspect, an embodiment of the present application further provides a voice playing apparatus, including:
the acquisition module is used for acquiring the voice data information of the target application under the condition that at least two applications are in a conversation state;
the first playing module is used for playing the voice data information of the target application through a voice channel under the condition that the number of the target applications is one;
and the second playing module is used for playing the voice data information of the first application with the highest priority through a voice channel according to a preset playing priority under the condition that the number of the target applications is at least two, and storing the voice data information of other applications except the first application in the target applications.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the voice playing method according to the first aspect.
In a fourth aspect, the present application further provides a readable storage medium, on which a program or instructions are stored, and when the program or instructions are executed by a processor, the steps of the voice playing method according to the first aspect are implemented.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the voice playing method according to the first aspect.
In the embodiment of the application, voice data information of a target application is acquired under the condition that at least two applications are in a call state; if the number of target applications is one, the voice data information of the target application is played through a voice channel; if the number of target applications is at least two, the voice data information of the first application with the highest priority is played through the voice channel according to a preset playing priority, and the voice data information of the target applications other than the first application is stored. In this way, one or more voice data streams can be processed while voice call quality is guaranteed in a scenario of concurrent real-time communication, achieving the effect of handling multiple calls while maintaining call quality.
Drawings
Fig. 1 is a schematic flow chart of a voice playing method according to an embodiment of the present application;
fig. 2 is a second schematic flowchart of a voice playing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a voice playing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second" and the like in the description and claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be appreciated that data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practised in sequences other than those illustrated or described herein. Furthermore, the terms "first", "second" and the like are used in a generic sense and do not limit the number of elements; for example, a first element may be one element or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the objects before and after it are in an "or" relationship.
The following describes in detail a voice playing method provided in the embodiments of the present application with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flow chart of a voice playing method according to an embodiment of the present application. The method is applied to electronic equipment, and the implementation process of the method is specifically described below with reference to the figure.
Step 101, acquiring voice data information of a target application under the condition that at least two applications are in a call state;
it is understood that at least two applications are in a call state in a scenario where at least two real-time communications are concurrent, that is, there are at least two real-time communication events that are ongoing simultaneously.
For example, the electronic device needs to handle two or more voice-communication events in the same time period; specifically, while the user is in a voice or video chat with one person through application A, the user simultaneously answers an incoming call from another person through the phone application.
It is noted that the target application is at least one of the at least two applications.
Here, when at least two applications are in a call state, the electronic device is able to acquire the voice data information of each of the at least two applications (if an application has no voice data information, its voice data information is regarded as empty). The voice data information of different applications may be acquired at different times or at the same time.
Step 102, if the number of the target applications is one, playing voice data information of the target applications through a voice channel;
In this step, the scenario of a single target application means that the currently acquired voice data information belongs to only one application (i.e., it is the voice data information of that target application). Since the electronic device usually has only one voice channel, playing the voice data information of the target application through that voice channel ensures voice call quality.
It should be noted that the voice channel is the audio channel through which sound is output to the earphone or the speaker of the electronic device.
Or, in step 103, if the number of the target applications is at least two, playing the voice data information of the first application with the highest priority through the voice channel according to a preset playing priority, and storing the voice data information of other applications except the first application in the target applications.
In this step, the scenario of at least two target applications, i.e., multiple target applications, means that the currently acquired voice data information belongs to multiple applications, while the electronic device usually has only one voice channel, so the voice data information of multiple applications cannot be played at the same time. In this step, therefore, the voice data information of the first application with the highest playing priority is played through the voice channel according to the preset playing priority, and the voice data information of the target applications other than the first application is stored, so that multiple voice data streams can be handled without degrading the call quality of the stream being played, and the streams do not affect one another.
According to the voice playing method, voice data information of a target application is acquired under the condition that at least two applications are in a call state; if the number of target applications is one, the voice data information of the target application is played through a voice channel; if the number of target applications is at least two, the voice data information of the first application with the highest priority is played through the voice channel according to a preset playing priority, and the voice data information of the target applications other than the first application is stored. One or more voice data streams can thus be processed while voice call quality is guaranteed in a scenario of concurrent real-time communication, achieving the effect of handling multiple calls while maintaining call quality.
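The core dispatch of steps 101 to 103 can be pictured with a small sketch. The Kotlin fragment below is only an illustration under assumed names (VoiceData, VoicePlaybackDispatcher and the two callbacks are not part of the embodiment): a single target application is played directly, while several target applications are sorted by the preset playing priority, the highest-priority one is played and the rest are stored.

```kotlin
// Illustrative sketch only; all names are assumptions, not part of the embodiment.
class VoiceData(val appId: String, val samples: ShortArray)

class VoicePlaybackDispatcher(
    private val playPriority: Map<String, Int>,            // preset playing priority per application
    private val playOnVoiceChannel: (VoiceData) -> Unit,   // the single voice channel (earpiece/speaker)
    private val store: (VoiceData) -> Unit                 // temporary storage for streams not being played
) {
    // targets: the voice data information acquired from the current target application(s)
    fun onVoiceData(targets: List<VoiceData>) {
        when {
            targets.isEmpty() -> return
            targets.size == 1 -> playOnVoiceChannel(targets.first())           // step 102: one target application
            else -> {                                                           // step 103: several target applications
                val byPriority = targets.sortedByDescending { playPriority[it.appId] ?: 0 }
                playOnVoiceChannel(byPriority.first())   // first application = highest preset priority
                byPriority.drop(1).forEach(store)        // store the voice data of the other applications
            }
        }
    }
}
```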
It should be noted that, for voice data information of different applications acquired at different times, it is sufficient to play the corresponding voice data information in chronological order.
The embodiment of the application is mainly aimed at scenarios in which the voice data information of different applications is acquired at the same time, or in which the acquisition periods of different applications overlap.
In an optional implementation manner, the number of target applications is two, and the voice channel includes a left channel and a right channel of a first device; that is, the electronic device is an earphone or speaker device, or the electronic device is connected to an earphone or speaker device. The voice data information of one of the two applications is played through the left channel, and the voice data information of the other application is played through the right channel.
It should be noted that, because the data of the left and right channels of the earphone or speaker device is stored in two different storage locations, the voice data information of the two applications is copied to the two different storage locations respectively, so that the two voice streams can be played simultaneously.
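A minimal sketch of this left/right-channel arrangement is given below, assuming 16-bit mono PCM for each call; interleaving the two streams into one stereo buffer corresponds to writing them to the two storage locations of the left and right channels. The function name and framing are assumptions made only for illustration.

```kotlin
// Illustrative only: 16-bit mono PCM per call, interleaved into one stereo buffer
// (left sample, right sample, ...). Shorter streams are padded with silence.
fun interleaveToStereo(leftCall: ShortArray, rightCall: ShortArray): ShortArray {
    val frames = maxOf(leftCall.size, rightCall.size)
    val stereo = ShortArray(frames * 2)
    for (i in 0 until frames) {
        stereo[2 * i] = leftCall.getOrElse(i) { 0.toShort() }       // left channel: first application's call
        stereo[2 * i + 1] = rightCall.getOrElse(i) { 0.toShort() }  // right channel: second application's call
    }
    return stereo
}
```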
As an optional implementation manner, the method in the embodiment of the present application further includes:
under the condition that first voice data information is acquired through a microphone, determining a second application corresponding to the first voice data information;
it is noted that the second application is one of the at least two applications.
It can be understood that, since this embodiment corresponds to a scenario in which at least two applications are in a call state, when the first voice data information is acquired through the microphone it needs to be sent to the electronic device corresponding to the call object in one of those applications. The key question is therefore which application the first voice data information corresponds to.
In an optional implementation manner, the determining, in this step, the second application corresponding to the first voice data information may specifically include:
performing semantic analysis on the first voice data information, and determining second voice data information related to the first voice data information, wherein the second voice data information is voice data information of the target application;
and determining the application corresponding to the second voice data information as the second application.
It should be noted that a call between users is an information interaction process, and the information produced by the two parties in a conversation is usually related: the same topic tends to recur during the conversation, the same topic tends to involve the same keywords, and personal information of the call object, such as a name, may also recur. Therefore, in this implementation, by performing semantic analysis on the first voice data information, the second voice data information related to it can be found among the previously acquired voice data information of the target applications (the acquired voice data information of the target applications is, of course, also semantically analysed), and the application corresponding to the second voice data information is determined as the second application. That is, the user wants to send the first voice data information to the electronic device corresponding to the call object in the second application, with whom the user is having that conversation.
Illustratively, suppose the voice information "When shall we go to the shopping mall?" from application A and the voice information "What is XX's phone number?" from application B are received at the same time. If the user's first voice data information is "Let's go to the shopping mall this afternoon", the voice information from application A is recognised as the second voice data information associated with it, application A is accordingly determined as the second application, and the first voice data information "Let's go to the shopping mall this afternoon" is sent through application A.
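A rough sketch of this semantic-association step is given below. It approximates "related voice data information" by simple keyword overlap between the outgoing utterance and the most recent downlink utterance of each in-call application; the tokenisation, the scoring and the function name are assumptions made only for illustration, and a real implementation would use proper semantic analysis.

```kotlin
// Illustrative only: "related" is approximated by keyword overlap.
fun pickSecondApplication(
    outgoingUtterance: String,                 // first voice data information (from the microphone)
    recentUtterances: Map<String, String>      // appId -> latest downlink utterance of that application
): String? {
    fun keywords(text: String): Set<String> =
        text.lowercase().split(Regex("\\W+")).filter { it.length > 2 }.toSet()

    val outgoing = keywords(outgoingUtterance)
    return recentUtterances.entries
        .maxByOrNull { (_, utterance) -> (keywords(utterance) intersect outgoing).size }
        ?.key                                  // the second application, or null if no application is in a call
}

// e.g. pickSecondApplication("going to the shopping mall this afternoon",
//          mapOf("A" to "when shall we go to the shopping mall",
//                "B" to "what is XX's phone number"))   // -> "A"
```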
As another optional implementation, the screen displays the identifications of the at least two applications; correspondingly, the determining of the second application corresponding to the first voice data information in this step may specifically include:
under the condition that the highlight display of the target identification is detected, determining the application corresponding to the target identification as the second application;
wherein the target identity is one of the identities of the at least two applications.
In this implementation, detecting the highlighting of the target identifier and acquiring the first voice data information through the microphone may occur simultaneously; or the highlighting of the target identifier may be detected first and the first voice data information acquired through the microphone afterwards; or the first voice data information may first be acquired through the microphone and temporarily stored, and the highlighting of the target identifier detected afterwards.
The screen specifically refers to the screen of the electronic device. Here, the identification of an application is used to identify that application and helps the user distinguish between different applications.
Optionally, the identifier of the application may be an application icon, a floating window with a preset shape, or an application call interaction interface; the identifier of an application can take many forms, and the above are only examples and are not specifically limited here.
When the highlighting of the target identifier is detected, it indicates that the user of the electronic device is currently in, or wants to enter, voice communication with the call object of the application corresponding to the target identifier. Therefore, the application corresponding to the highlighted target identifier is determined as the second application.
The highlighting of the target identifier may be an increase in brightness or a distinctive display colour; any display manner that distinguishes the target identifier from the other identifiers may be used, and it is not specifically limited here.
The first voice data information is then sent to a first target electronic device, where the first target electronic device is the electronic device corresponding to the call object in the second application.
It should be noted that the highlighted identifier of the application also serves as an intuitive prompt to the user, indicating which application the call object currently talking with the user belongs to. Moreover, when the user wants to talk with the call object of a particular application (or a particular call object within the same application), the user can trigger the highlighting of the identifier of the corresponding application and then directly output the voice data information, i.e. simply speak, so that complex voice analysis can be omitted and the power consumption of the device reduced.
In an example, the identifiers of the at least two applications are the call interaction interfaces of the respective applications. When the highlighting of a target application call interaction interface is detected, the application corresponding to that interface is determined as the second application, and the first voice data acquired through the microphone is sent to the electronic device corresponding to the call object in the second application.
Here, the call object specifically refers to the party in a call with the user of the electronic device.
Optionally, the at least two application call interaction interfaces are displayed on the screen of the electronic device in a split-screen manner, and the target application call interaction interface can be highlighted through a user input.
In this example, highlighting the target application call interaction interface corresponding to the first voice data information specifically means displaying it in a manner that makes it stand out among the at least two application call interaction interfaces. For example, the highlighting may be increased brightness, i.e. the target application call interaction interface is brighter than the other interfaces. Of course, the highlighting may also be a display colour of the target application call interaction interface that differs from the display colour of the other interfaces, and it is not limited here.
In another example, in the case where the identifiers of the at least two applications are their corresponding application call interaction interfaces, the at least two application call interaction interfaces are displayed on the screen of the electronic device in a split-screen manner. First, a user input (e.g., a click input) to a first application call interaction interface is received, the first application call interaction interface being one of the at least two application call interaction interfaces. In response to the user input, when the first voice data information is acquired through the microphone, the second application corresponding to the first voice data information is determined to be the application corresponding to the first application call interaction interface, and the first voice data information is sent to the electronic device corresponding to the call object in the second application.
Here, when the voice data information is received after the user has answered an incoming call, the corresponding application call interaction interface is the call answering interface; when the voice data information is received through a social application, the corresponding application call interaction interface is the interaction interface of that social application, such as an audio interaction interface or a video interaction interface.
In this example, the user can switch autonomously and manually: by providing an input to the application call interaction interface corresponding to the desired call object, the user manually switches to that interface and thereby controls to which call object the voice data information is sent.
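The manual-selection path described in this example can be sketched as follows: whichever application call interaction interface the user has highlighted becomes the second application, and subsequent microphone data is forwarded to that call's partner. The class and callback names below are assumptions for the sketch.

```kotlin
// Illustrative only: class, callback and parameter names are assumptions.
class OutgoingVoiceRouter(
    private val send: (appId: String, pcm: ShortArray) -> Unit   // deliver to the call object of that application
) {
    private var highlightedAppId: String? = null

    // Called when the user highlights an application call interaction interface,
    // e.g. by tapping its pane in the split-screen layout.
    fun onInterfaceHighlighted(appId: String) {
        highlightedAppId = appId
    }

    // First voice data information captured by the microphone goes to the
    // call object of the currently highlighted (second) application.
    fun onMicrophoneData(pcm: ShortArray) {
        highlightedAppId?.let { send(it, pcm) }
    }
}
```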
Because the voice channel is limited, in order to prevent the voice call between the home-end electronic device and the opposite-end electronic device whose voice data information is not currently being played from being broken off, the method of the embodiment of the present application may further include:
sending prompt information to a second target electronic device under the condition that voice data information of a first application with the highest priority is played through a voice channel, wherein the prompt information is used for prompting the electronic device to keep call connection;
the second target electronic device is an electronic device corresponding to a call object in other applications except the first application in the target application.
Here, the prompt information may be a text prompt or a voice prompt, and is not limited specifically here.
In an example, application M and application N are both in a call state, i.e. the user is in a call with the call objects of both applications, and the playing priority of application M is higher than that of application N. While the voice data information from application M is played through the voice channel and the voice data information of application N is temporarily stored, a prompt message is sent to the electronic device corresponding to the call object in application N, prompting that party not to hang up and to wait a moment for the reply.
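A minimal sketch of this keep-alive prompt, under assumed names, is shown below: whenever one stream is selected for playback, a hold prompt is sent towards the call objects of all other buffered streams.

```kotlin
// Illustrative only: the notifier and its callback are assumed names.
class HoldPromptNotifier(
    private val sendPrompt: (appId: String, text: String) -> Unit   // e.g. via that application's signalling channel
) {
    // playingAppId: the application whose voice data is on the voice channel;
    // bufferedAppIds: applications whose voice data is stored for later playback.
    fun onStreamsScheduled(playingAppId: String, bufferedAppIds: List<String>) {
        bufferedAppIds
            .filter { it != playingAppId }
            .forEach { sendPrompt(it, "Please hold on, the other party will reply shortly.") }
    }
}
```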
As an optional implementation manner, after playing, according to a preset playing priority, voice data information of a first application with a highest priority through a voice channel, and storing voice data information of other applications except the first application in the target application, the method of the embodiment of the present application further includes:
after a preset time length after the voice data information of the first application is played, if third voice data information is not acquired, playing voice data information of a third application in the target application;
the third application is different from the first application, and the playing priority of the third application is lower than that of the first application.
Here, optionally, the third voice data information is voice data information from the first application. Because the first application has the highest playing priority, if no new voice data information (i.e. third voice data information) is acquired after the voice data information of the first application has been played, the voice data information of the other applications is played in turn according to the preset playing priority.
Specifically, in order of preset playing priority from high to low, the voice data information of the application with the highest priority is played in real time through the voice channel while the voice data information of the remaining applications is stored; after the highest-priority voice data information has been played, the voice data information with the highest priority among the remaining applications is played through the voice channel, and so on, until all the voice data information has been played through the voice channel.
In one example, user A has a voice call with user B through a first application of the electronic device and a voice call with user C through a second application of the electronic device, and it is detected that both remote users are speaking at the same time, i.e. downlink voice data from both exists simultaneously. If the playing priority of the first application is set higher than that of the second application, the voice data information of user B from the first application is played preferentially through the voice channel, while the voice data information of user C from the second application is temporarily stored without being played. After the voice data information of user B from the first application has been played, the voice data information of user C from the second application is played through the voice channel.
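The ordering described above (and in steps S10 to S12 of fig. 2 below) can be sketched as a priority queue that only releases the next stream once the current one has finished and a preset blank interval has passed without new voice data from the higher-priority application. All names, the timestamp-based timing model and the default interval below are assumptions for illustration.

```kotlin
import java.util.PriorityQueue

// Illustrative only: names, timing model and default blank interval are assumptions.
class PendingVoice(val appId: String, val samples: ShortArray)

class PriorityVoiceQueue(
    private val playPriority: Map<String, Int>,
    private val blankMillis: Long = 500L               // preset duration to wait for new (third) voice data
) {
    private val queue =
        PriorityQueue<PendingVoice>(compareByDescending<PendingVoice> { playPriority[it.appId] ?: 0 })
    private var lastActivityAt = 0L                    // end of last playback, or last data from the playing app

    fun store(pending: PendingVoice) { queue.add(pending) }

    fun markPlaybackFinished(nowMillis: Long) { lastActivityAt = nowMillis }
    fun markNewDataFromPlayingApp(nowMillis: Long) { lastActivityAt = nowMillis }

    // Returns the next stored stream to play once the blank interval has elapsed
    // without new voice data, or null if playback should not start yet.
    fun nextToPlay(nowMillis: Long): PendingVoice? =
        if (nowMillis - lastActivityAt >= blankMillis) queue.poll() else null
}
```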
The following describes an implementation process of the method according to the embodiment of the present application, as an example, as shown in fig. 2.
S1: the first terminal is in a video call with a second terminal through a first application;
S2: the first terminal receives an incoming call request from a third terminal;
Here, the incoming call request from the third terminal is received while the first terminal is in the video call with the second terminal through the first application.
S3: judging whether the first terminal answers the incoming call from the third terminal;
If yes, step S4 is executed, and at this point a real-time communication concurrency event has occurred; otherwise, step S1 is executed.
S4: the first terminal displays the video call interface with the second terminal and the call answering interface with the third terminal in a split-screen manner;
S5: judging whether the first terminal is in earphone mode;
If yes, step S6 is executed; otherwise, step S8 is executed;
S6: playing the downlink voice data of the video call and the downlink voice data of the incoming call through the left channel and the right channel of the earphone device respectively;
S7: judging whether any call of the first terminal has been interrupted;
If yes, the process ends; otherwise, step S6 is executed again;
S8: the first terminal collects downlink voice data;
Here, in the non-earphone mode, i.e. when the earpiece or the loudspeaker is used, there is only one output device, so a choice has to be made. In this case the first terminal can perform voice analysis on the data from the first application and on the data of the call with the third terminal, so as to detect in real time whether a human voice is present (a minimal sketch of such a voice-activity check is given after the summary paragraph below).
S9: judging, through the voice analysis, whether downlink voice data exists in both the video call scenario and the incoming call scenario;
If yes, step S10 is executed; otherwise, step S14 is executed;
S10: according to the preset priority, playing the voice data with the higher priority of the two and caching the voice data with the lower priority;
S11: judging whether there is blank time after the higher-priority voice data has been played;
If yes, step S12 is executed; otherwise, step S10 is executed;
S12: playing the cached lower-priority voice data;
S13: judging whether any call of the first terminal has been interrupted;
If yes, the process ends; otherwise, the process returns to step S8.
S14: playing whichever downlink voice data exists;
If downlink voice data exists in the video call scenario, that voice data is played; if downlink voice data exists in the incoming call scenario, that voice data is played.
According to the voice playing method, voice data information of a target application is acquired under the condition that at least two applications are in a call state; if the number of target applications is one, the voice data information of the target application is played through a voice channel; if the number of target applications is at least two, the voice data information of the first application with the highest priority is played through the voice channel according to a preset playing priority, and the voice data information of the target applications other than the first application is stored. One or more voice data streams can thus be processed while voice call quality is guaranteed in a scenario of concurrent real-time communication, achieving the effects of handling multiple calls and guaranteeing call quality.
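As an illustration of the voice analysis mentioned in steps S8 and S9 above, the following is a very small stand-in for detecting whether downlink data contains a human voice: a short-term energy check over 16-bit PCM frames. The function name, frame size and threshold are assumptions for this sketch; a production implementation would use a proper voice-activity detector.

```kotlin
// Illustrative only: frame size, threshold and function name are assumptions.
fun hasVoice(pcm: ShortArray, frameSize: Int = 160, energyThreshold: Double = 1.0e6): Boolean {
    var start = 0
    while (start < pcm.size) {
        val end = minOf(start + frameSize, pcm.size)
        var energy = 0.0
        for (i in start until end) {
            val s = pcm[i].toDouble()
            energy += s * s
        }
        if (energy / (end - start) > energyThreshold) return true   // frame loud enough to count as speech
        start = end
    }
    return false
}
```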
It should be noted that, in the voice playing method provided in the embodiment of the present application, the execution main body may be a voice playing device, or a control module used for executing the voice playing method in the voice playing device. The embodiment of the present application takes the voice playing device as an example to execute the voice playing method, and describes the voice playing device provided in the embodiment of the present application.
Fig. 3 is a schematic structural diagram of a voice playing apparatus according to an embodiment of the present application. The voice playing apparatus 300 may include:
an obtaining module 301, configured to obtain voice data information of a target application when at least two applications are in a call state;
a first playing module 302, configured to play, through a voice channel, voice data information of the target application when the number of the target applications is one;
a second playing module 303, configured to play, according to a preset playing priority, voice data information of a first application with a highest priority through a voice channel when the number of the target applications is at least two, and store voice data information of other applications except the first application in the target applications.
Optionally, the apparatus 300 further comprises:
the first processing module is used for determining a second application corresponding to first voice data information under the condition that the first voice data information is acquired through a microphone;
and the first sending module is used for sending the first voice data information to a first target electronic device, wherein the first target electronic device is an electronic device corresponding to a call object in the second application.
Optionally, the first processing module includes:
a semantic analysis unit, configured to perform semantic analysis on the first voice data information, and determine second voice data information related to the first voice data information, where the second voice data information is voice data information of the target application;
and the first processing unit is used for determining the application corresponding to the second voice data information as the second application.
Optionally, the apparatus 300 further comprises:
the second sending module is used for sending prompt information to the second target electronic equipment under the condition that the voice data information of the first application with the highest priority is played through the voice channel, wherein the prompt information is used for prompting the electronic equipment to keep call connection;
the second target electronic device is an electronic device corresponding to a call object in other applications except the first application in the target application.
Optionally, the apparatus 300 further comprises:
the third playing module is used for playing the voice data information of a third application in the target application after a preset time length after the voice data information of the first application is played and under the condition that the third voice data information is not acquired;
the third application is different from the first application, and the playing priority of the third application is lower than that of the first application.
The voice playing device in the embodiment of the present application may be a device, or may also be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Network Attached Storage (NAS), a personal computer (personal computer, PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not limited in particular.
The voice playing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The voice playing device provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
The voice playing device of the embodiment of the application acquires voice data information of a target application under the condition that at least two applications are in a call state; if the number of target applications is one, the voice data information of the target application is played through a voice channel; if the number of target applications is at least two, the voice data information of the first application with the highest priority is played through the voice channel according to a preset playing priority, and the voice data information of the target applications other than the first application is stored. One or more voice data streams can thus be processed while voice call quality is guaranteed in a scenario of concurrent real-time communication, achieving the effects of handling multiple calls and guaranteeing call quality.
Optionally, as shown in fig. 4, an electronic device 400 is further provided in this embodiment of the present application, and includes a processor 401, a memory 402, and a program or an instruction stored in the memory 402 and executable on the processor 401, where the program or the instruction is executed by the processor 401 to implement each process of the foregoing voice playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511.
Those skilled in the art will appreciate that the electronic device 500 may further include a power supply (e.g., a battery) for supplying power to various components, and the power supply may be logically connected to the processor 510 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 510 is configured to, when at least two applications are in a call state, obtain voice data information of a target application; if the number of the target applications is one, playing voice data information of the target applications through a voice channel; or if the number of the target applications is at least two, playing the voice data information of the first application with the highest priority through a voice channel according to a preset playing priority, and storing the voice data information of other applications except the first application in the target applications.
In the embodiment of the application, under the scene of real-time communication concurrence, one path or multiple paths of voice data can be processed on the premise of ensuring the voice communication quality, so that the effects of processing multiple paths of communication and ensuring the communication quality are achieved.
Optionally, the processor 510 is further configured to:
under the condition that first voice data information is acquired through a microphone, determining a second application corresponding to the first voice data information;
the radio frequency unit 501 is configured to send the first voice data information to a first target electronic device, where the first target electronic device is an electronic device corresponding to a call target in the second application.
Optionally, the processor 510 is further configured to:
performing semantic analysis on the first voice data information, and determining second voice data information related to the first voice data information, wherein the second voice data information is voice data information of the target application;
and determining the application corresponding to the second voice data information as the second application.
Optionally, the radio frequency unit 501 is further configured to:
sending prompt information to a second target electronic device under the condition that voice data information of a first application with the highest priority is played through a voice channel, wherein the prompt information is used for prompting the electronic device to keep call connection;
the second target electronic device is an electronic device corresponding to a call object in other applications except the first application in the target application.
Optionally, the processor 510 is further configured to:
after a preset duration after the voice data information of the first application is played, if third voice data information is not obtained, playing voice data information of a third application in the target application through an audio output unit 503;
the third application is different from the first application, and the playing priority of the third application is lower than that of the first application.
In the embodiment of the application, under the scene of real-time communication concurrence, one path or multiple paths of voice data can be processed on the premise of ensuring the voice communication quality, so that the effects of processing multiple paths of communication and ensuring the communication quality are achieved.
It should be understood that, in the embodiment of the present application, the input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse and a joystick, which are not described in further detail here. The memory 509 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 510 may integrate an application processor, which mainly handles the operating system, user interfaces, applications and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 510.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing voice playing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing voice playing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method for playing speech, comprising:
acquiring voice data information of a target application under the condition that at least two applications are in a call state;
if the number of the target applications is one, playing voice data information of the target applications through a voice channel;
or if the number of the target applications is at least two, playing the voice data information of the first application with the highest priority through a voice channel according to a preset playing priority, and storing the voice data information of other applications except the first application in the target applications.
2. The method of claim 1, further comprising:
under the condition that first voice data information is acquired through a microphone, determining a second application corresponding to the first voice data information;
and sending the first voice data information to a first target electronic device, wherein the first target electronic device is an electronic device corresponding to a call object in the second application.
3. The method of claim 2, wherein the determining the second application corresponding to the first voice data information comprises:
performing semantic analysis on the first voice data information, and determining second voice data information related to the first voice data information, wherein the second voice data information is voice data information of the target application;
and determining the application corresponding to the second voice data information as the second application.
4. The method of claim 1, further comprising:
sending prompt information to a second target electronic device under the condition that voice data information of a first application with the highest priority is played through a voice channel, wherein the prompt information is used for prompting the electronic device to keep call connection;
the second target electronic device is an electronic device corresponding to a call object in other applications except the first application in the target application.
5. The method of claim 1, wherein after playing the voice data information of the first application with the highest priority through the voice channel according to a preset playing priority and storing the voice data information of the other applications except the first application in the target application, the method further comprises:
after a preset time length after the voice data information of the first application is played, if third voice data information is not acquired, playing voice data information of a third application in the target application;
the third application is different from the first application, and the playing priority of the third application is lower than that of the first application.
6. A voice playback apparatus, comprising:
the acquisition module is used for acquiring the voice data information of the target application under the condition that at least two applications are in a conversation state;
the first playing module is used for playing the voice data information of the target application through a voice channel under the condition that the number of the target applications is one;
and the second playing module is used for playing the voice data information of the first application with the highest priority through a voice channel according to a preset playing priority under the condition that the number of the target applications is at least two, and storing the voice data information of other applications except the first application in the target applications.
7. The apparatus of claim 6, further comprising:
the first processing module is used for determining a second application corresponding to first voice data information under the condition that the first voice data information is acquired through a microphone;
and the first sending module is used for sending the first voice data information to a first target electronic device, wherein the first target electronic device is an electronic device corresponding to a call object in the second application.
8. The apparatus of claim 7, wherein the first processing module comprises:
a semantic analysis unit, configured to perform semantic analysis on the first voice data information, and determine second voice data information related to the first voice data information, where the second voice data information is voice data information of the target application;
and the first processing unit is used for determining the application corresponding to the second voice data information as the second application.
9. The apparatus of claim 7, further comprising:
the second sending module is used for sending prompt information to the second target electronic equipment under the condition that the voice data information of the first application with the highest priority is played through the voice channel, wherein the prompt information is used for prompting the electronic equipment to keep call connection;
the second target electronic device is an electronic device corresponding to a call object in other applications except the first application in the target application.
10. The apparatus of claim 7, further comprising:
the third playing module is used for playing the voice data information of a third application in the target application after a preset time length after the voice data information of the first application is played and under the condition that the third voice data information is not acquired;
the third application is different from the first application, and the playing priority of the third application is lower than that of the first application.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the voice playback method as claimed in any one of claims 1 to 5.
12. A readable storage medium, characterized in that the readable storage medium has stored thereon a program or instructions which, when executed by a processor, implement the steps of the voice playing method according to any one of claims 1 to 5.
CN202011605567.3A 2020-12-30 2020-12-30 Voice playing method and device and electronic equipment Pending CN112751971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011605567.3A CN112751971A (en) 2020-12-30 2020-12-30 Voice playing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011605567.3A CN112751971A (en) 2020-12-30 2020-12-30 Voice playing method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112751971A true CN112751971A (en) 2021-05-04

Family

ID=75647267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011605567.3A Pending CN112751971A (en) 2020-12-30 2020-12-30 Voice playing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112751971A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050220069A1 (en) * 2004-04-01 2005-10-06 Nortel Networks Limited Method for providing bearer specific information for wireless networks
CN101626560A (en) * 2009-07-31 2010-01-13 中兴通讯股份有限公司 Method and cell phone device for supporting concurrence of voice calls of circuit domain and PS domain
CN101800950A (en) * 2009-12-29 2010-08-11 宇龙计算机通信科技(深圳)有限公司 Method, system and mobile terminal for realizing voice mail in conversation process
CN107872568A (en) * 2017-09-27 2018-04-03 努比亚技术有限公司 A kind of talking management method, mobile terminal and computer-readable recording medium
CN109639738A (en) * 2019-01-30 2019-04-16 维沃移动通信有限公司 The method and terminal device of voice data transmission
CN111935801A (en) * 2020-07-16 2020-11-13 中国联合网络通信集团有限公司 Voice access method, system, terminal device and computer readable storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113329203A (en) * 2021-05-31 2021-08-31 维沃移动通信(杭州)有限公司 Call control method, call control device, electronic device and readable storage medium
CN113934397A (en) * 2021-10-15 2022-01-14 深圳市一诺成电子有限公司 Broadcast control method in electronic equipment and electronic equipment
CN114025230A (en) * 2021-11-09 2022-02-08 湖南快乐阳光互动娱乐传媒有限公司 Terminal video playing method and related device
CN114710588A (en) * 2022-03-28 2022-07-05 重庆长安汽车股份有限公司 Vehicle-mounted telephone conflict control system, method, electronic device and storage medium
CN114938363A (en) * 2022-04-22 2022-08-23 厦门紫光展锐科技有限公司 Voice data transmission device and method
CN114938363B (en) * 2022-04-22 2023-10-13 厦门紫光展锐科技有限公司 Voice data transmission device and method
CN116935846A (en) * 2023-06-29 2023-10-24 珠海谷田科技有限公司 Offline conference light control method, device, equipment and storage medium
CN116935846B (en) * 2023-06-29 2024-03-19 珠海谷田科技有限公司 Offline conference light control method, device, equipment and storage medium
CN117135266A (en) * 2023-10-25 2023-11-28 Tcl通讯科技(成都)有限公司 Information processing method, device and computer readable storage medium
CN117135266B (en) * 2023-10-25 2024-03-22 Tcl通讯科技(成都)有限公司 Information processing method, device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN112751971A (en) Voice playing method and device and electronic equipment
CN111629409B (en) Call control method and device and electronic equipment
CN113360238A (en) Message processing method and device, electronic equipment and storage medium
EP4184506A1 (en) Audio processing
CN111884908B (en) Contact person identification display method and device and electronic equipment
CN111666009B (en) Interface display method and electronic equipment
CN113382270B (en) Virtual resource processing method and device, electronic equipment and storage medium
CN112711366A (en) Image generation method and device and electronic equipment
CN112764710A (en) Audio playing mode switching method and device, electronic equipment and storage medium
CN112099702A (en) Application running method and device and electronic equipment
CN111752448A (en) Information display method and device and electronic equipment
CN113271376A (en) Communication control method, electronic equipment and earphone
CN112702468A (en) Call control method and device
US20170201479A1 (en) Group message display method, device and medium
CN105227891A (en) A kind of video call method and device
CN111556271B (en) Video call method, video call device and electronic equipment
CN113992786A (en) Audio playing method and device
CN115103231A (en) Video call method and device, first electronic equipment and second electronic equipment
US8638766B2 (en) Electronic device and method of controlling the same
CN112134997B (en) Audio channel state control method and device, electronic equipment and readable storage medium
EP3001660B1 (en) Method, device and system for telephone interaction
CN113329203A (en) Call control method, call control device, electronic device and readable storage medium
CN113692067A (en) Device connection method, device and readable storage medium
CN112383666A (en) Content sending method and device and electronic equipment
CN113518143A (en) Interface input source switching method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210504)