CN115955455A - Voice message processing method and device and electronic equipment - Google Patents

Voice message processing method and device and electronic equipment

Info

Publication number
CN115955455A
CN115955455A · CN202211280732.1A
Authority
CN
China
Prior art keywords
window
voice message
input
user
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211280732.1A
Other languages
Chinese (zh)
Inventor
付维浪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211280732.1A priority Critical patent/CN115955455A/en
Publication of CN115955455A publication Critical patent/CN115955455A/en
Pending legal-status Critical Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a voice message processing method and device and electronic equipment, and belongs to the technical field of communication. The method comprises the following steps: under the condition that a first window and at least one second window are displayed, receiving a first input of a user to a voice recording control in the first window; in response to the first input, recording a first voice message; receiving a second input of the user; in response to the second input, performing a first operation; wherein the first operation comprises any one of: splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played in a first target window of the at least one second window; sending the first voice message through the first window, and recording a third voice message corresponding to a second target window of the at least one second window; and sending the first voice message through the first window and a third target window of the at least one second window.

Description

Voice message processing method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a voice message processing method and device and electronic equipment.
Background
With the development of device technology, electronic devices offer increasingly rich functions; for example, an electronic device can send or receive voice messages.
Taking the case where the electronic device sends a voice message as an example: when the user needs to send a voice message to contact A, the user can trigger the electronic device to display the session interface of contact A, long-press the voice recording control in the session interface, and speak the voice content to be sent, thereby triggering the electronic device to record the voice content. Then, after the user's finger leaves the voice recording control, the electronic device may generate a voice message based on the recorded voice content and send the voice message to contact A.
Disclosure of Invention
The embodiment of the application aims to provide a voice message processing method, a voice message processing device and electronic equipment, which can improve the flexibility of sending a voice message by the electronic equipment.
In a first aspect, an embodiment of the present application provides a method for processing a voice message, where the method includes: under the condition that a first window and at least one second window are displayed, receiving a first input of a user to a voice recording control in the first window; in response to the first input, recording a first voice message; receiving a second input of the user; in response to the second input, performing a first operation; wherein the first operation comprises any one of: splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played in a first target window of the at least one second window; sending the first voice message through the first window, and recording a third voice message corresponding to a second target window of the at least one second window; and sending the first voice message through the first window and a third target window of the at least one second window.
In a second aspect, an embodiment of the present application provides a voice message processing apparatus, including: the device comprises a display module, a receiving module and a control module; the receiving module is used for receiving a first input of a user to a voice recording control in a first window under the condition that the display module displays the first window and at least one second window; the control module is used for responding to the first input received by the receiving module and recording to obtain a first voice message; the receiving module is further used for receiving a second input of the user; the control module is further used for responding to the second input and executing a first operation; wherein the first operation comprises any one of: splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played by a first target window in the at least one second window; sending the first voice message through the first window, and recording a third voice message corresponding to a second target window in the at least one second window; and sending the first voice message through a third target window in the first window and the at least one second window.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, stored on a storage medium, for execution by at least one processor to implement the method according to the first aspect.
In the embodiment of the application, under the condition that a first window and at least one second window are displayed, a first input of a user to a voice recording control in the first window can be received; in response to the first input, a first voice message is recorded; a second input of the user is received; and in response to the second input, a first operation is performed; wherein the first operation comprises any one of: splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played in a first target window of the at least one second window; sending the first voice message through the first window, and recording a third voice message corresponding to a second target window of the at least one second window; and sending the first voice message through the first window and a third target window of the at least one second window. According to this scheme, under the condition that the first window and at least one second window are displayed, a user can record a first voice message through one input in the first window and then, through another input, trigger splicing of the first voice message with specific audio of the media file played in a simultaneously displayed window to obtain a second voice message, trigger sending of the first voice message through the first window while a voice message corresponding to the second target window is recorded, or trigger sending of the first voice message through both the first window and the third target window, so that the flexibility of processing voice messages can be improved.
Drawings
Fig. 1 is a schematic flowchart of a voice message processing method according to an embodiment of the present application;
fig. 2 is one of interface diagrams of a voice message processing method according to an embodiment of the present application;
fig. 3 is a second schematic interface diagram of a voice message processing method according to an embodiment of the present application;
fig. 4 is a third schematic interface diagram of a voice message processing method according to an embodiment of the present application;
fig. 5 is a fourth schematic interface diagram of a voice message processing method according to an embodiment of the present application;
fig. 6 is a fifth schematic interface diagram of a voice message processing method according to an embodiment of the present application;
fig. 7 is a sixth schematic interface diagram of a voice message processing method according to an embodiment of the present application;
fig. 8 is a seventh schematic interface diagram of a voice message processing method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a voice message processing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 11 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The following describes in detail a voice message processing method, an apparatus, an electronic device, and a readable storage medium according to embodiments of the present application with reference to the accompanying drawings.
Fig. 1 shows a possible flow diagram of the voice message processing method provided in the embodiment of the present application. As shown in fig. 1, the voice message processing method provided in the embodiment of the present application may include steps 101 to 104 described below. The following description takes the electronic device performing the method as an example.
Step 101, the electronic device receives a first input of a user to a voice recording control in a first window under the condition that the first window and at least one second window are displayed.
In this embodiment, the first window may be a session interface. It is understood that the voice recording control is the control used for voice recording through the session interface.
Optionally, the first input may be any one of: a long-press input by the user on the voice recording control, a voice instruction input by the user, or a specific gesture input by the user; it may be determined according to actual use requirements, which is not limited in the embodiment of the present application.
Optionally, the specific gesture in the embodiment of the present application may be any one of a single-click gesture, a sliding gesture, a dragging gesture, a pressure recognition gesture, a long-press gesture, an area change gesture, a double-press gesture, and a double-click gesture; the click input in the embodiment of the application may be a single-click input, a double-click input, or a click input of any number of times, and may also be a long-press input or a short-press input.
Optionally, the first window may include a conversation interface of the user with the group, or may include a conversation interface of the user with the contact.
Optionally, the first window and the at least one second window each include interfaces of different applications therein.
For example, the first window includes a session interface of the session application program 1, and the at least one second window includes a video playing interface in the video application program 1 and a session interface in the session application program 2, respectively.
Optionally, the electronic device may display the first window and the at least one second window in a split-screen manner; alternatively, the electronic device may display at least one second window on the first window in a picture-in-picture format; or the electronic device may display the first window and the remaining second windows in a picture-in-picture form on one second window, which may be determined according to actual usage requirements, and the embodiment of the present application is not limited.
Exemplarily, as shown in fig. 2, the electronic device may display 3 windows, namely window 1, window 2 and window 3 in a split screen manner, and display a session interface in a social Application (App) 1 in window 1, a session interface in social App2 in window 2, and a video playing interface in a video Application in window 3. Wherein, the first window may be window 1 or window 2.
Step 102, the electronic device records a first voice message in response to the first input.
In the embodiment of the application, the first voice message is recorded by the electronic device through the session interface in the first window.
In this embodiment of the present application, assuming that the first window includes a target session interface of a target session application, "recording the first voice message" may be understood as: recording, through the target session application, a first voice message associated with the target session interface. Specifically, after the recording of the first voice message is completed, the electronic device may directly send the first voice message through the first window. Further, "sending the first voice message through the first window" may be understood as: sending the first voice message to the contact object corresponding to the target session interface.
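The flow of steps 101 and 102 can be sketched in Python. This is a minimal illustration under stated assumptions, not the patented implementation; names such as `SessionWindow` and `record_first_voice_message` are hypothetical, and audio is modeled as a plain list of frames.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceMessage:
    samples: list          # raw audio frames (hypothetical representation)
    duration_s: float

@dataclass
class SessionWindow:
    contact: str                          # contact bound to this session interface
    sent: list = field(default_factory=list)

    def send(self, message: VoiceMessage) -> None:
        # "Sending through the window" delivers the message to the contact
        # corresponding to the window's session interface.
        self.sent.append(message)

def record_first_voice_message(samples: list, sample_rate: int = 16000) -> VoiceMessage:
    # Step 102: record the first voice message through the session interface
    # in the first window.
    return VoiceMessage(samples=samples, duration_s=len(samples) / sample_rate)

first_window = SessionWindow(contact="Zhang San")
first_message = record_first_voice_message([0] * 48000)   # 3 s at 16 kHz
first_window.send(first_message)
```

Sending directly through the first window is the default path; the operations below show the alternative paths triggered by the second input.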
And 103, receiving a second input of the user by the electronic equipment.
And step 104, the electronic equipment responds to the second input and executes the first operation.
Wherein the first operation may include any one of the following operations 1 to 3.
Operation 1, the first voice message is spliced with the first audio information in the first multimedia file to obtain a second voice message, and the first multimedia file is a multimedia file played by a first target window in at least one second window.
And operation 2, sending the first voice message through the first window, and recording a third voice message corresponding to a second target window in at least one second window.
And operation 3, sending the first voice message through the first window and a third target window of the at least one second window.
In the voice message processing method provided in the embodiment of the present application, when the first window and the at least one second window are displayed, a user may record the first voice message through one input in the first window and then, through another input, trigger the electronic device to splice the first voice message with specific audio of the media file played in a simultaneously displayed window to obtain the second voice message, to send the first voice message through the first window while recording the voice message corresponding to the second target window, or to send the first voice message through both the first window and the third target window, so that the flexibility of processing voice messages may be improved.
Optionally, the electronic device can splice the first audio information after or before the message content of the first voice message to obtain the second voice message.
Alternatively, the first multimedia file may be an audio file, a video file, or any other multimedia file associated with voice content. For example, the first multimedia file may be a movie, a piece of music, or a broadcast program.
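The splicing described above amounts to concatenating two audio sequences, with the captured audio placed after or before the recorded voice. A minimal sketch, assuming audio is represented as a simple list of frames (`splice_voice_messages` is a hypothetical name, not from the patent):

```python
def splice_voice_messages(first_voice: list, first_audio: list,
                          audio_after: bool = True) -> list:
    """Concatenate the recorded first voice message with the first audio
    information captured from the multimedia file. The captured audio may be
    appended after the recorded voice (as in the figures, where the video
    line follows the user's own words) or prepended before it."""
    return first_voice + first_audio if audio_after else first_audio + first_voice

user_voice = ["user_frame_0", "user_frame_1"]       # first voice message
video_line = ["video_frame_0", "video_frame_1"]     # first audio information
second_message = splice_voice_messages(user_voice, video_line)
```

In practice the frames would be PCM samples or encoded audio packets; the ordering logic is the same either way.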
The following describes a voice message processing method provided in an embodiment of the present application in conjunction with operation 1, operation 2, and operation 3.
Operation 1:
operation 1 may also be referred to as a multi-window sound collection scheme.
Alternatively, in a case where the first operation includes operation 1, the second input may include a first sub-input and a second sub-input. The step 103 may be realized by the following steps 103a and 103b, and the step 104 may be realized by the following steps 104a and 104b.
Step 103a, the electronic device receives a first sub-input of the user.
And 104a, the electronic equipment responds to the first sub-input and starts the audio recording function.
Alternatively, the first sub-input may be consecutive in time to the first input.
Optionally, the first sub-input may specifically be a sliding input that slides from a voice recording control displayed in the first window to the first target window.
For example, after the user considers the recording of the first voice message complete, the user slides the finger into the first target window without releasing the voice recording control. The electronic device can thus determine that the user needs to splice the audio information in the first multimedia file with the recorded first voice message, and therefore start the audio recording function for recording the audio information in the first multimedia file.
Optionally, in order to prompt the user that the electronic device has turned on the audio recording function, after the electronic device receives the first sub-input, a message identifier of the first voice message may be displayed on the first target window.
In step 103b, the electronic device receives a second sub-input of the user.
And step 104b, the electronic equipment responds to the second sub-input, acquires first audio information corresponding to the second sub-input in the first multimedia file, and splices the first voice message and the first audio information to obtain a second voice message.
Optionally, the second sub-input may be an input that the user triggers to play the first audio information in the first multimedia file, or may be an input that the play progress identifier in the first target window is dragged from the corresponding start play position of the first audio information to the corresponding play end position of the first audio information.
Optionally, before executing the first input, the user may first trigger the electronic device to stop playing the first multimedia file through an input on the play control in the first target window, and drag the playing progress identifier in the first target window to the initial playing position corresponding to the first audio information. In this way, after the user triggers the electronic device to start the audio recording function through the first sub-input, the playing progress identifier can be directly dragged from the initial playing position to the play-end position corresponding to the first audio information.
Therefore, the currently recorded voice message can be spliced with the voice message in the currently played media file, and compared with a scheme of only recording the voice message in the related art, the voice message processing method provided by the embodiment of the application can improve the flexibility and diversity of the obtained voice message.
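Capturing the first audio information between the initial playing position and the play-end position marked by the second sub-input can be sketched as a slice of the file's audio track. A hypothetical illustration, assuming raw samples at a fixed sample rate (the function name and representation are assumptions for this sketch):

```python
def capture_first_audio(track: list, sample_rate: int,
                        start_s: float, end_s: float) -> list:
    """Slice the first audio information out of the multimedia file's audio
    track, between the initial playing position and the play-end position
    indicated by the user's progress-bar drag."""
    start = int(start_s * sample_rate)
    end = int(end_s * sample_rate)
    return track[start:end]

track = list(range(10 * 8000))    # a dummy 10 s track at 8 kHz
segment = capture_first_audio(track, 8000, start_s=2.0, end_s=7.0)  # 5 s segment
```

The resulting segment is what would then be spliced onto the recorded first voice message to form the second voice message.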
Optionally, after the electronic device obtains the second voice message, the electronic device may send the second voice message through the first window. Therefore, compared with the scheme that only recorded voice messages can be sent in the related art, the voice message processing method provided by the embodiment of the application can improve the flexibility and diversity of sending the voice messages.
Optionally, in the case that the first operation includes operation 1, after the step 104, the voice message processing method provided in the embodiment of the present application may further include the following steps 105 and 106.
And 105, the electronic equipment receives a third input of the message identifier corresponding to the second voice message from the user.
And the third input is the input of dragging the message identifier corresponding to the second voice message to the area corresponding to the fourth target window in the at least one second window.
And step 106, in response to the third input, the electronic device sends the second voice message through the fourth target window.
Optionally, the electronic device may display the message identifier corresponding to the second voice message after obtaining the second voice message, or may display the message identifier corresponding to the second voice message while obtaining the second voice message.
Optionally, a session interface is included in the fourth target window, and the description of the session interface refers to the related description in step 102.
It can be understood that the number of fourth target windows is not limited in the embodiments of the present application; that is, the fourth target window may be one window or multiple windows. When the fourth target window is multiple windows, the electronic device may send the second voice message through each of the multiple windows.
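Sending the second voice message through one or several fourth target windows reduces to delivering the same message through each selected window. A minimal sketch; the `Window` class and the contact names are hypothetical illustrations, not from the patent:

```python
class Window:
    """Minimal stand-in for a window hosting a session interface."""
    def __init__(self, contact: str):
        self.contact = contact
        self.inbox = []

    def send(self, message) -> None:
        self.inbox.append(message)

def forward_message(message, target_windows) -> None:
    # The fourth target window may be one window or several; dragging the
    # message identifier onto each triggers a send through that window.
    for window in target_windows:
        window.send(message)

targets = [Window("Li Si"), Window("Wang Wu")]   # hypothetical contacts
forward_message("second_voice_message", targets)
```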
The following describes an exemplary voice message processing method according to an embodiment of the present application with reference to the drawings.
Illustratively, take sound collection under multiple windows as an example. Suppose the user finds a line of dialogue in a video (i.e., the first multimedia file) particularly good and, in addition to the line itself, wants to send a friend a comment of his own: "Listen to this, this line sounds great" (the user's own voice) followed by "xxxxxxxx" (the dialogue audio in the video). The user can record his own voice and then capture the audio of the video. Specifically, as shown in (a) of fig. 3, in the first step, the user moves the video progress bar 31 in the first target window 30 to the time point to be captured, i.e., the initial playing position corresponding to the first audio information, and then triggers the electronic device to pause video playback through a click input on the playing control 32 in the video playing interface 30. In the second step, the user long-presses the voice recording frame 34 in the first window 33 to trigger the electronic device to record his voice, saying: "Listen to this, this line sounds great"; that is, the electronic device receives the first input and records the first voice message in response to it. The user then slides the finger toward the first target window 30 without releasing it; at this time, as shown in (b) of fig. 3, the voice content just spoken by the user becomes a voice message identifier 35 displayed in the first target window 30. In the third step, as shown in (b) of fig. 4, the user clicks the video playing control 32 in the first target window 30; the electronic device resumes playing the video, and the played audio information is automatically spliced after the voice message recorded by the user.
In the fourth step, the user clicks the video playing control 32 again to pause playback, ending the collection of the video dialogue and thereby obtaining the second voice message. As can be seen, the user's operations in the third and fourth steps constitute the second sub-input.
Then, as shown in (c) of fig. 3, the electronic device may update the message identifier 35 of the first voice message to the message identifier 36 of the second voice message. Moreover, as shown in (c) of fig. 3, the electronic device may display, on the message identifier 36 of the second voice message, the duration "3s" of the voice recorded by the user and the duration "5s" of the audio information from the video, and may also display application identifiers of the different capture sources, such as the application identifier "Y1" of the social application 1 corresponding to the conversation interface in the first window 33 and the application identifier "Y2" of the video application corresponding to the video playing interface in the first target window.
Further, since the electronic device may display the message identifier of the second voice message, if the user needs to forward the second voice message to "Li Si", the user may drag the message identifier 36 of the second voice message into a fourth target window 37 including a conversation interface with "Li Si", as shown in (c) of fig. 3, to trigger the electronic device to send the second voice message to "Li Si" through the fourth target window 37.
Therefore, after the recorded voice message is spliced with the audio information in the multimedia file by the electronic equipment, the message identifier corresponding to the spliced voice message can be displayed, so that the user can trigger the electronic equipment to send the spliced voice message through other currently displayed windows by inputting the message identifier, and the flexibility of sending the voice message can be further improved.
Operation 2:
in this embodiment of the application, in operation 2, the second input may be a sliding input of the voice recording control by the user. The step 104 can be specifically realized by the step 104c described below.
And 104c, the electronic equipment responds to the sliding input, sends the first voice message through the first window and records a third voice message corresponding to a second target window in at least one second window.
Wherein the second target window is determined based on the sliding input.
Optionally, the second target window may include any one of:
a window on the sliding track of the sliding input in the at least one second window;
a window located in a sliding direction of the sliding input among the at least one second window;
a first window of the at least one second window located in a sliding direction of the sliding input;
and the window corresponding to the sliding end position of the sliding input in the at least one second window.
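Two of the candidate rules above — the window whose area contains the sliding end position, and the first window located in the sliding direction — can be sketched for a vertical split-screen layout. The geometry model (windows as vertical intervals on the screen) is an assumption for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SplitWindow:
    name: str
    top: int      # vertical extent; smaller y is higher on screen
    bottom: int

def window_at_position(windows: list, end_y: int) -> Optional[SplitWindow]:
    """Rule: the window whose area contains the sliding end position."""
    for w in windows:
        if w.top <= end_y < w.bottom:
            return w
    return None

def first_window_in_direction(windows: list, start_y: int,
                              upward: bool) -> Optional[SplitWindow]:
    """Rule: the first second-window encountered in the sliding direction."""
    if upward:
        candidates = [w for w in windows if w.bottom <= start_y]
        return max(candidates, key=lambda w: w.bottom) if candidates else None
    candidates = [w for w in windows if w.top >= start_y]
    return min(candidates, key=lambda w: w.top) if candidates else None

second_windows = [SplitWindow("window 2", 0, 400), SplitWindow("window 3", 800, 1200)]
# A slide starting in the first window (y = 600) and ending at y = 200:
by_end = window_at_position(second_windows, 200)
by_direction = first_window_in_direction(second_windows, 600, upward=True)
```

The other two rules (any window on the sliding track, any window in the sliding direction) follow the same pattern with the filter relaxed.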
It should be noted that the second target window includes a session interface, and the session interface is different from the session interface in the first window. For example, the first window and the second target window include session interfaces in different session applications.
Optionally, after obtaining the third voice message, the electronic device may send the third voice message through the second target window.
Alternatively, the second input may be performed when the first input ends. For example, the second input may be a sliding input in which the user's finger, without leaving the voice recording control, slides in a first direction on the voice recording control.
When the user finishes recording the first voice message, if the user further wants to send another voice message, namely a third voice message, to the contact object corresponding to the session interface in the second target window of the at least one second window, the user can slide from the voice recording control in the direction of the second target window, namely the first direction. This triggers the electronic device to send the first voice message through the first window and to activate the voice recording control in the second target window to record the third voice message corresponding to the second target window. After the second input ends, the electronic device may automatically send the third voice message through the second target window, so that the contact object corresponding to the session interface in the second target window receives the third voice message.
Therefore, after the recording of the voice message of one window is finished, the electronic device can be triggered to send the recorded voice message through the window and continuously record the voice message corresponding to the other window, and compared with a scheme that a user inputs voice recording controls in different windows in sequence to trigger the electronic device to respectively record and send the voice message, the voice message processing method provided by the embodiment of the application can improve the flexibility of voice message processing and simplify the operation process of sending different voice messages to contact objects corresponding to different session interfaces.
Optionally, after the step 104c, the voice message processing method provided in the embodiment of the present application may further include the following step 107.
And step 107, the electronic equipment responds to the second input and updates the display positions of the first window and the second target window.
Therefore, the user can know that the electronic equipment currently records the voice message corresponding to the second target window.
Illustratively, suppose the user needs to record different voice contents to send to contacts in different social programs in different windows. To reduce the user's operation cost, as shown in (a) of fig. 4, the user may long-press the voice recording control 41 in the first window 40 to trigger the electronic device to record the first voice message through the first window 40 (by default, to be sent to the contact in the current window, such as "Li Si"), and then slide upward without releasing the voice recording control 41. As shown in (b) of fig. 4, the electronic device exchanges the display positions of the first window 40 and the first second target window 42 that includes a session interface and was originally located above the first window 40, activates the voice recording control in the second target window 42, and automatically sends the first voice message to the contact "Li Si" corresponding to the session interface in the first window. In this way, the user can know that the electronic device is currently recording the voice message corresponding to the second target window.
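The display-position exchange described above can be sketched as a swap in a layout mapping. A hypothetical illustration; representing the layout as a window-name-to-slot dictionary is an assumption for this sketch:

```python
def swap_display_positions(layout: dict, first: str, target: str) -> dict:
    """Exchange the on-screen positions of the first window and the second
    target window so the user can see which window is now recording."""
    layout = dict(layout)   # leave the original layout untouched
    layout[first], layout[target] = layout[target], layout[first]
    return layout

layout = {"first window": "bottom", "second target window": "top"}
updated = swap_display_positions(layout, "first window", "second target window")
```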
Operation 3:
optionally, in operation 3, the second input may be an input of sliding from the voice recording control to an interface identifier of the session interface in the third target window. The interface identifier may be replaced by an identifier of the session application program corresponding to the session interface in the third target window, such as an application icon.
In this way, since the user can, through the second input, trigger the electronic device to send the recorded voice message through both the first window and a third target window in the at least one second window, the efficiency of recording and sending voice messages can be improved.
Optionally, in operation 3, after the step 101, the voice message processing method provided in the embodiment of the present application may further include the following step 108, and the step 104 may be specifically implemented by the following step 104 d.
Step 108, the electronic device displays at least one application identifier corresponding to the at least one second window in response to the first input.
Wherein the second input may include: a selection input on a target application identifier in the at least one application identifier. For example, the selection input may include: when the first input ends, an input of sliding from the voice recording control onto the target application identifier.
It should be noted that at least one second window corresponds to at least one application identifier one to one.
Step 104d, the electronic device sends the first voice message through the first window and the third target window in response to the selection input;
wherein the third target window is the window corresponding to the target application identifier.
The following describes a voice message processing method provided in an embodiment of the present application by way of example with reference to the accompanying drawings.
Illustratively, for voice message distribution under multiple windows: as shown in fig. 5 (a), the first window 50 includes a session interface of social APP1 corresponding to the contact "Zhang San", and the second window 52 includes a session interface of social APP2 corresponding to the contact "Li Si". If the user needs to send a voice message to contacts of different social programs at the same time, the specific process is as follows: the user long-presses the voice recording control 51 in the first window 50 to trigger voice recording, for example "Haven't played ball in a long time; let's go play this afternoon". The terminal detects that it is currently under multiple windows and that session chat windows of multiple social programs exist, so, as shown in (a) of fig. 5, the electronic device may display the application identifier "Y2" of social APP2 in an area adjacent to the voice recording control 51. As shown in fig. 5 (b), without releasing the gesture, the user may slide a finger onto the application identifier "Y2" to end the voice recording. When the user's finger leaves "Y2", the electronic device automatically sends the recorded voice message to both "Zhang San" and "Li Si".
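The distribution decision in fig. 5 can be sketched as follows. This is a hedged sketch: the function name, the identifier-to-window map, and the release semantics are assumptions made for illustration, not details given by this application.

```python
def dispatch_voice_message(message, first_window, identifier_to_window, released_on=None):
    """Return the list of windows the recorded message is sent through.

    `identifier_to_window` maps a displayed application identifier (e.g. "Y2")
    to the second window it stands for; `released_on` is the identifier the
    gesture was released on, or None if the gesture ended on the control."""
    targets = [first_window]          # the first window always sends the message
    if released_on is not None and released_on in identifier_to_window:
        # third target window: the window matching the selected identifier
        targets.append(identifier_to_window[released_on])
    return targets
```

Releasing on "Y2" yields both windows; releasing on the control itself sends only through the first window.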
Optionally, the voice message processing method provided in this embodiment of the present application may further include step 109 described below.
And step 109, in the process of playing the fourth voice message in the first window, if the first condition is met, the electronic device decreases the playing volume of the second multimedia file.
The second multimedia file is a multimedia file played by a fifth target window in at least one second window.
In the embodiment of the present application, the first condition may include: the playing priority corresponding to the fourth voice message is higher than the playing priority corresponding to the second multimedia file.
In this embodiment of the application, the playing priority corresponding to the second multimedia file may specifically be the playing priority corresponding to the content currently being played in the second multimedia file. For example, a playing priority may be set for at least some of the objects in the second multimedia file. Thus, when the sound of these objects and a voice message are played simultaneously, the electronic device can turn down the sound of these objects, preventing the user from missing the voice content in the voice message.
In this way, when the electronic device simultaneously displays the play window and the session window (i.e., the first window) of the multimedia file, the electronic device can reduce the play volume of the multimedia file when the first condition is satisfied, so that the user can be prevented from missing the voice content in the important voice message.
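The first-condition check in step 109 amounts to a simple priority comparison. A minimal sketch follows, assuming numeric priorities and a fixed ducking factor; both are assumptions for the example, not values specified by this application.

```python
def media_volume(base_volume, voice_priority, media_priority, duck_factor=0.3):
    """Return the volume to use for the second multimedia file while a
    voice message plays in the first window at the same time."""
    if voice_priority > media_priority:   # the first condition is satisfied
        return base_volume * duck_factor  # turn down the media playback
    return base_volume                    # otherwise leave the volume as-is
```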
Optionally, before step 109, the voice message processing method provided in this embodiment may further include step 110 and step 111 described below.
Step 110, the electronic device receives a fourth input from the user.
And step 111, the electronic equipment responds to the fourth input and executes a second operation corresponding to the fourth input.
Wherein the second operation may include any one of the following operations 4 and 5:
and operation 4, associating at least part of the contact objects corresponding to the first window with the first playing priority.
And operation 5, associating at least part of the playing content in the second multimedia file with a second playing priority.
In the embodiment of the present application, the first playing priority is higher than the second playing priority.
In this embodiment of the application, the at least part of the contact objects includes the contact object that sent the fourth voice message.
Optionally, the fourth input may include a third sub-input and a fourth sub-input.
Optionally, in operation 4, the third sub-input is input for triggering the electronic device to display the identifiers of all the contact objects corresponding to the first window; the fourth sub-input is an input that triggers the electronic device to associate the object indicated by all or part of the identifiers with the first play priority.
Specifically, the third sub-input may be any one of: the user's long press input to the first window, the voice instruction input by the user, or the specific gesture input by the user may be specifically determined according to the actual use requirement, which is not limited in the embodiment of the present application. The fourth sub-input may be an input of a user on the identifier indicating the first playing priority, a voice instruction input by the user, or a specific gesture input by the user, which may be specifically determined according to actual usage requirements, and is not limited in this embodiment of the present application. For other descriptions of the third sub-input and the fourth sub-input, refer to the related description of the first input in the above embodiments.
Optionally, in operation 5, taking the second multimedia file as a video file as an example, the third sub-input is an input of a user to a specific playing object displayed in the fifth target window, such as a long-press input, a voice instruction input by the user, or a specific gesture; after the electronic device receives the third sub-input, an option of "social voice first" may be displayed, and then the user may perform a fourth sub-input on the option of "social voice first", such as a long press input on the option, a user input of a specific voice instruction or a specific gesture for selecting the second play priority; the electronic device may then associate the playing content (i.e., the target playing content) corresponding to the specific playing object with the second playing priority.
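Operations 4 and 5 together build a pair of priority associations that the first condition later consults. The following sketch assumes a simple in-memory registry with illustrative numeric priority constants; none of these names come from this application.

```python
# Illustrative priority levels (assumptions): the first play priority
# outranks the default, which outranks the second play priority.
FIRST_PRIORITY, DEFAULT_PRIORITY, SECOND_PRIORITY = 2, 1, 0

class PriorityRegistry:
    def __init__(self):
        self.contact_priority = {}   # contact object -> play priority
        self.content_priority = {}   # playing object -> play priority

    def prefer_contact(self, contact):
        # operation 4: associate a contact object with the first play priority
        self.contact_priority[contact] = FIRST_PRIORITY

    def demote_content(self, playing_object):
        # operation 5: associate specific playing content with the second priority
        self.content_priority[playing_object] = SECOND_PRIORITY

    def voice_wins(self, sender, playing_object):
        """True when a voice message from `sender` should play and the
        media content of `playing_object` should be ducked (the first
        condition of step 109)."""
        v = self.contact_priority.get(sender, DEFAULT_PRIORITY)
        m = self.content_priority.get(playing_object, DEFAULT_PRIORITY)
        return v > m
```

In the fig. 6 scenario, demoting the playing character's content is enough for any incoming voice message to take precedence over that character's audio.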
The following describes a voice message processing method provided in an embodiment of the present application by way of example with reference to the accompanying drawings.
Illustratively, when the electronic device displays a plurality of windows and different windows include different interfaces, if sound is played in more than one of them, the problem of sound-focus preemption is inevitable: for example, a video plays in one window while a voice message is exchanged with a contact in another window, and if the voice message and the video sound play simultaneously they interfere with each other. On the other hand, playing the video only after the voice message has been listened to is not intelligent enough. If the user encounters video clips he does not particularly want to hear, he may prefer to focus on the voice message chat, so different priorities can be set for sound playing in different windows. The playing program detects through the system that other applications, such as a social voice program, exist under the multi-window display. As shown in fig. 6, the user long-presses (i.e., the third sub-input) a playing character 61 in a video window 60 (i.e., the fifth target window), and an option is displayed in the playing window: the "social voice play precedence" option. For example, if the user does not like the lines of the playing character 61 or considers them unimportant, the user clicks the "social voice play precedence" option (i.e., the fourth sub-input); the player records the option and synchronizes it to the system, i.e., the electronic device, so that the electronic device associates the audio content corresponding to the playing character 61 with the second playing priority. After a voice message is received through the conversation window, if the sound of a character for which "social voice play precedence" has been set is playing in the video window 60, the electronic device controls the social application corresponding to the conversation window to play the newly received voice message directly, and notifies the player to reduce the volume of that character's audio content.
The video window refers to a window including a video playing interface.
Correspondingly, if the session window is a group chat scene, some of the members in the group chat may also be associated with the first playing priority, so that the voice messages of these members are played in preference to the audio content in the multimedia file. A conversation window refers to a window that includes a conversation interface. The playing strategy for voice messages of group members not associated with the first playing priority is the same as the voice message playing strategy in the related art.
Therefore, the user can set the playing priority of the specific playing content in the multimedia or set the playing priority of at least part of the contact objects corresponding to the target session interface by triggering the electronic equipment according to the using requirement of the user, so that the flexibility of playing the voice message by the electronic equipment under the condition of displaying a plurality of interfaces can be improved.
It should be noted that the embodiment of the present application also provides another voice message processing method, and specifically, the voice message processing method may include the following steps a to G. The electronic device is taken as an example to execute the method.
Step A, the electronic equipment receives a first target input to a playing window under the condition that the session window of the session application and the playing window of the multimedia file are displayed.
Optionally, the first target input may be a long-press input on the play window by the user, a voice instruction input by the user, or a specific gesture input by the user, which may be specifically determined according to actual use requirements, and is not limited in this embodiment of the application. For other descriptions of the first target input, reference may be made specifically to the relevant description of the first input in the above embodiments.
And B, the electronic equipment responds to the first target input and displays the audio extraction control and the application identification of the conversation application.
Wherein the audio extraction control can be used to obtain audio information from the multimedia file.
Optionally, the electronic device, in response to the first target input, may further display: a video extraction control and a picture extraction control. Wherein the video extraction control is used for obtaining the video clip from the multimedia file, and the picture extraction control is used for obtaining the video picture (i.e. not including the sound) from the multimedia file.
It is to be appreciated that when the electronic device displays conversation windows for a plurality of conversation applications, the electronic device can display an application identification for each of the conversation applications.
And step C, the electronic equipment receives second target input of the user to the audio extraction control and the application identification.
Optionally, the second target input may include: the touch input (such as long-press input) of the audio extraction control and the application identifier by the user, the voice instruction input by the user, or the specific gesture input by the user may be specifically determined according to the requirement of the user, which is not limited in the embodiment of the present application.
For other descriptions of the second target input, reference may be made specifically to the relevant description of the first input in the above embodiments.

And D, the electronic equipment responds to the second target input, selects at least one contact object from the conversation application, and displays the object identifier of the at least one contact object in the conversation application at a first playing position of the playing window.
The first playing position is the same as the display position of the playing progress mark in the playing window of the multimedia file.
And E, the electronic equipment receives a third target input of the user to the playing window.
Optionally, the third target input may include: dragging the playing progress identification in the playing window from the first playing position to the second playing position, long-time pressing input on the first playing position and the second playing position, voice instructions input by a user, or input which can arbitrarily specify an audio information extraction range such as specific gestures input by the user.
And F, the electronic equipment responds to the third target input, acquires target audio information between the first playing position and the second playing position in the multimedia file, and generates a target voice message from the target audio information.
And G, the electronic equipment sends the target voice message to at least one contact object.
The following describes an exemplary voice message processing method according to an embodiment of the present application with reference to the drawings.
Illustratively, as shown in fig. 7 (a), the electronic device displays a conversation window 70 corresponding to "Zhang San" in social application 1, a conversation window 71 corresponding to "Li Si" in social application 1, and a play window 72 of a video. If the user wants to extract a piece of audio information from the video and send it to a certain contact, such as "Zhang San", the user may drag the progress bar in the playing window 72 to the beginning of the piece of audio information and then long-press the progress bar, that is, the electronic device receives the first target input. In response to the first target input, as shown in fig. 7 (b), the electronic device may display an option list 73, the application icon identifier Y1 of social application 1, and the application icon identifier Y2 of social application 2, where the option list 73 includes a "video extraction" option, a "sound extraction only" option (i.e., the audio extraction control), and a "picture extraction only" option (i.e., the picture extraction control). The user then clicks the "sound extraction only" option to trigger the electronic device to start the audio extraction function. After the user clicks the application icon Y1 of social application 1, the electronic device may display a contact list (not shown in the figure) of social application 1, so that, as shown in (c) of fig. 7, the user may drag the contact avatar 74 of one contact in the contact list to the position of the progress bar, and then further drag the progress bar to the end position of the piece of audio information. After the progress bar has been dragged, the section between the contact avatar 74 and the current position of the progress bar represents the sound extraction range. As shown in fig. 7 (c), the user may long-press the contact avatar 74; as shown in fig. 7 (d), the electronic device may then display a "send" option 75, and when the user clicks the "send" option 75, the electronic device sends the voice message to the contact.
Therefore, when the electronic equipment simultaneously displays the conversation window of the conversation application and the playing window of the multimedia file, the user can trigger the electronic equipment to generate the voice message from the audio information in the currently played multimedia file and send the voice message to at least one contact object in the conversation application, so that the flexibility of sending the voice message can be improved.
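Steps E to G reduce to slicing the audio between two play positions and addressing the result to the selected contacts. The following is a hedged sketch that models audio as a flat sample list; that representation and the message structure are simplifications assumed for the example, and real code would operate on encoded media frames.

```python
def extract_voice_message(samples, sample_rate, start_s, end_s, contacts):
    """Cut the target audio information between the first play position
    (`start_s`) and the second play position (`end_s`), both in seconds,
    and generate a target voice message addressed to each contact."""
    lo = int(start_s * sample_rate)       # first playing position, in samples
    hi = int(end_s * sample_rate)         # second playing position, in samples
    segment = samples[lo:hi]              # target audio information
    return [{"to": contact, "audio": segment} for contact in contacts]
```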
In addition, when the electronic device displays a plurality of windows, if the sound in one window is needed by an application in another window, sound sharing between different applications can be realized. Specifically, assuming that the electronic device displays a conversation window and a playing window of a multimedia file, as shown in fig. 8 (a), the user may drag a voice message 81 in the conversation window 80 into the playing window 82 with a gesture, at which time the playing window 82 pauses the video. As shown in (b) of fig. 8, the voice message 81 is first displayed in a floating manner in the playing window 82; the user can drag the playing progress bar of the video with a gesture to select the time at which the message needs to be inserted, and the electronic device can then insert the voice message 81 into the video.
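The drag-to-insert flow of fig. 8 can be sketched as a splice into the video's audio track at the selected time. Tracks are modeled as plain sample lists here, which is an assumption for the sketch rather than the real media pipeline.

```python
def insert_audio(track, voice, sample_rate, at_seconds):
    """Return a new audio track with the dragged voice message `voice`
    inserted at the time the user selected on the progress bar."""
    cut = int(at_seconds * sample_rate)   # insertion point, in samples
    return track[:cut] + voice + track[cut:]
```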
In the voice message processing method provided by the embodiment of the application, the execution main body can be a voice message processing device. In the embodiment of the present application, a voice message processing method executed by a voice message processing apparatus is taken as an example, and the voice message processing apparatus provided in the embodiment of the present application is described.
Fig. 9 shows a schematic structural diagram of the voice message processing apparatus provided in the embodiment of the present application, and as shown in fig. 9, the voice message processing apparatus 90 may include: a display module 91, a receiving module 92 and a control module 93.
The receiving module 92 is configured to receive a first input of a user to a voice recording control in a first window when the first window and at least one second window are displayed by the display module 91;
the control module 93 is configured to record a first voice message in response to the first input received by the receiving module 92;
the receiving module 92 is further configured to receive a second input from the user;
the control module 93, further configured to perform a first operation in response to the second input;
wherein the first operation comprises any one of:
splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played by a first target window in the at least one second window;
sending the first voice message through the first window, and recording a third voice message corresponding to a second target window in the at least one second window;
and sending the first voice message through the first window and a third target window in the at least one second window.
In a possible implementation manner, the control module 93 is further configured to send the second voice message through the first window after obtaining the second voice message.
In one possible implementation, the second input includes a first sub-input and a second sub-input; a control module 93, specifically configured to respond to the first sub-input and start an audio extraction function; and responding to the second sub-input, acquiring the first audio information corresponding to the second sub-input in the first multimedia file, and splicing the first voice message and the first audio information to obtain the second voice message.
In a possible implementation manner, the receiving module 92 is further configured to receive, after the control module 93 obtains the second voice message, a third input of a message identifier corresponding to the second voice message by a user, where the third input is an input of dragging the message identifier to an area corresponding to a fourth target window in the at least one second window;
the control module 93 is further configured to send the second voice message through the fourth target window in response to the third input received by the receiving module 92.
In one possible implementation, the second input includes: a sliding input by the user on the voice recording control;
a control module 93, specifically configured to respond to the sliding input, send the first voice message through the first window, and record a third voice message corresponding to a second target window in the at least one second window;
wherein the second target window is determined based on the sliding input.
In a possible implementation manner, the control module 93 is further configured to update the display positions of the first window and the second target window in response to the second input after the receiving module 92 receives the second input.
In a possible implementation manner, the control module 93 is further configured to display at least one application identifier corresponding to the at least one second window in response to the first input after the receiving module 92 receives the first input;
the second input comprises: selecting and inputting a target application identifier in the at least one application identifier; the control module 93 is specifically configured to send the first voice message through the first window and the third target window in response to the selection input;
and the third target window is a window corresponding to the target application identifier.
In a possible implementation manner, the voice message processing apparatus further includes a playing module; the control module 93 is further configured to, in a process of playing the fourth voice message in the first window by the playing module, if a first condition is met, turn down a playing volume of a second multimedia file, where the second multimedia file is a multimedia file played in a fifth target window in the at least one second window;
wherein the first condition comprises:
and the playing priority corresponding to the fourth voice message is higher than the playing priority corresponding to the second multimedia file.
In the voice message processing apparatus provided in the embodiment of the present application, when the first window and the at least one second window are displayed, the user may record the first voice message through one input in the first window and then, through another input, trigger the apparatus to splice the first voice message with specific audio information of a media file played in a simultaneously displayed window to obtain the second voice message, or to send the first voice message through the first window and the third target window, or to continue recording a voice message corresponding to the second target window while sending the first voice message through the first window, so that the flexibility of processing voice messages may be improved.
The voice message processing apparatus in the embodiment of the present application may be an electronic device, and may also be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not specifically limited thereto.
The voice message processing device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The voice message processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 8, and is not described herein again to avoid repetition.
Optionally, as shown in fig. 10, an electronic device 600 is further provided in this embodiment of the present application, and includes a processor 601 and a memory 602, where the memory 602 stores a program or an instruction that can be executed on the processor 601, and when the program or the instruction is executed by the processor 601, the steps of the foregoing voice message processing method embodiment are implemented, and the same technical effects can be achieved, and are not described again here to avoid repetition.
Fig. 11 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may further comprise a power supply (e.g., a battery) for supplying power to various components, and the power supply may be logically connected to the processor 710 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system. The electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
A user input unit 707, configured to receive a first input of a user to a voice recording control in a first window in a case that the display unit 706 displays the first window and at least one second window;
a processor 710 for recording a first voice message in response to a first input received by the user input unit 707;
a user input unit 707 further configured to receive a user second input;
a processor 710 further for performing a first operation in response to a second input;
wherein the first operation comprises any one of:
splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played by a first target window in at least one second window;
sending a first voice message through a first window, and recording a third voice message corresponding to a second target window in at least one second window;
and sending the first voice message through the first window and a third target window in the at least one second window.
In a possible implementation, the processor 710 is further configured to send the second voice message through the first window after obtaining the second voice message.
In one possible implementation, the second input includes a first sub-input and a second sub-input;
a processor 710, specifically configured to activate an audio extraction function in response to the first sub-input; and responding to the second sub-input, acquiring first audio information corresponding to the second sub-input in the first multimedia file, and splicing the first voice message and the first audio information to obtain a second voice message.
In a possible implementation manner, the user input unit 707 is further configured to receive, after the processor 710 obtains the second voice message, a third input of a message identifier corresponding to the second voice message by the user, where the third input is an input of dragging the message identifier to a region corresponding to a fourth target window in the at least one second window;
the processor 710 is further configured to transmit a second voice message through a fourth target window in response to a third input received by the user input unit 707.
In one possible implementation, the second input includes: a sliding input by the user on the voice recording control;
the processor 710 is specifically configured to respond to a sliding input, send a first voice message through a first window, and record a third voice message corresponding to a second target window in at least one second window;
wherein the second target window is determined based on the sliding input.
In one possible implementation, the processor 710 is further configured to update the display positions of the first window and the second target window in response to a second input after the user input unit 707 receives the second input.
In a possible implementation, the processor 710 is further configured to display at least one application identifier corresponding to at least one second window in response to the first input after the first input is received by the user input unit 707;
the second input includes: selecting and inputting a target application identifier in the at least one application identifier;
a processor 710, specifically configured to send a first voice message through the first window and the third target window in response to a selection input;
and the third target window is the window corresponding to the target application identifier.
In a possible implementation manner, the electronic device further includes the audio output unit 703;
the processor 710 is further configured to, in a process that the audio output unit 703 plays the fourth voice message in the first window, if a first condition is met, turn down a playing volume of a second multimedia file, where the second multimedia file is a multimedia file played in a fifth target window in the at least one second window;
wherein the first condition comprises:
the playing priority corresponding to the fourth voice message is higher than the playing priority corresponding to the second multimedia file.
In the electronic device provided in the embodiment of the present application, when the first window and the at least one second window are displayed, the user may record the first voice message through one input in the first window and then, through another input, trigger the electronic device to splice the first voice message with specific audio information of a media file played in a simultaneously displayed window to obtain the second voice message, or to send the first voice message through the first window and the third target window, or to continue recording a voice message corresponding to the second target window while sending the first voice message through the first window, so that the flexibility of processing voice messages may be improved.
It should be understood that in the embodiment of the present application, the input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042. The graphics processing unit 7041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The display unit 706 may include a display panel 7061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. The other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
The memory 709 may be used to store software programs and various data. The memory 709 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system and the programs or instructions required by at least one function (such as a sound playing function or an image playing function). Further, the memory 709 may include volatile memory or non-volatile memory, or both. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor, which primarily handles operations involving the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 710.
An embodiment of the present application further provides a readable storage medium on which a program or an instruction is stored. When the program or the instruction is executed by a processor, the processes of the above voice message processing method embodiment are implemented, and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a computer Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the foregoing voice message processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing voice message processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in a reverse order depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions that cause a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, the application is not limited to the precise embodiments described above, which are illustrative rather than restrictive; those skilled in the art may make various changes without departing from the spirit and scope of the application as defined by the appended claims.

Claims (10)

1. A method for processing a voice message, the method comprising:
under the condition that a first window and at least one second window are displayed, receiving a first input of a user to a voice recording control in the first window;
responding to the first input, and recording to obtain a first voice message;
receiving a second input of the user;
in response to the second input, performing a first operation;
wherein the first operation comprises any one of:
splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played by a first target window in the at least one second window;
sending the first voice message through the first window, and recording a third voice message corresponding to a second target window in the at least one second window;
and sending the first voice message through the first window and a third target window in the at least one second window.
2. The method of claim 1, wherein after obtaining the second voice message, the method further comprises:
and sending the second voice message through the first window.
3. The method of claim 1, wherein the second input comprises a first sub-input and a second sub-input;
the performing, in response to the second input, a first operation comprising:
responding to the first sub-input, and starting an audio extraction function;
and responding to the second sub-input, acquiring the first audio information corresponding to the second sub-input in the first multimedia file, and splicing the first voice message and the first audio information to obtain the second voice message.
4. The method of any of claims 1-3, wherein after obtaining the second voice message, the method further comprises:
receiving a third input of the user on a message identifier corresponding to the second voice message, wherein the third input is an input of dragging the message identifier to a region corresponding to a fourth target window in the at least one second window;
in response to the third input, sending the second voice message through the fourth target window.
5. The method of claim 1, wherein the second input comprises: a drag input of the user on the voice recording control;
the performing, in response to the second input, a first operation comprising:
responding to the drag input, sending the first voice message through the first window, and recording the third voice message corresponding to the second target window in the at least one second window;
wherein the second target window is determined based on the drag input.
6. The method of claim 5, wherein after receiving the second input from the user, the method further comprises:
in response to the second input, updating display positions of the first window and the second target window.
7. The method of claim 1, wherein after receiving the first input of the user to the voice recording control in the first window, the method further comprises:
displaying at least one application identification corresponding to the at least one second window in response to the first input;
the second input comprises: selecting and inputting a target application identifier in the at least one application identifier;
the performing a first operation in response to the second input comprises:
in response to the selection input, sending the first voice message through the first window and the third target window;
and the third target window is a window corresponding to the target application identifier.
8. The method of claim 1, further comprising:
in the process of playing the fourth voice message in the first window, if a first condition is met, the playing volume of a second multimedia file is reduced, wherein the second multimedia file is a multimedia file played by a fifth target window in the at least one second window;
wherein the first condition comprises:
and the playing priority corresponding to the fourth voice message is higher than the playing priority corresponding to the second multimedia file.
9. A voice message processing apparatus, the apparatus comprising: the device comprises a display module, a receiving module and a control module;
the receiving module is used for receiving a first input of a user to a voice recording control in a first window under the condition that the display module displays the first window and at least one second window;
the control module is used for responding to the first input received by the receiving module and recording to obtain a first voice message;
the receiving module is further used for receiving a second input of the user;
the control module is further used for responding to the second input and executing a first operation;
wherein the first operation comprises any one of:
splicing the first voice message with first audio information in a first multimedia file to obtain a second voice message, wherein the first multimedia file is a multimedia file played by a first target window in the at least one second window;
sending the first voice message through the first window, and recording a third voice message corresponding to a second target window in the at least one second window;
and sending the first voice message through the first window and a third target window in the at least one second window.
10. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the steps of the voice message processing method of any of claims 1 to 8.
CN202211280732.1A 2022-10-19 2022-10-19 Voice message processing method and device and electronic equipment Pending CN115955455A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211280732.1A CN115955455A (en) 2022-10-19 2022-10-19 Voice message processing method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN115955455A (en) 2023-04-11

Family

ID=87288032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211280732.1A Pending CN115955455A (en) 2022-10-19 2022-10-19 Voice message processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN115955455A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination