CN115623321A - Message processing method and device, electronic equipment and readable storage medium - Google Patents

Message processing method and device, electronic equipment and readable storage medium Download PDF

Info

Publication number
CN115623321A
CN115623321A
Authority
CN
China
Prior art keywords
message
user
audio data
preview interface
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211253942.1A
Other languages
Chinese (zh)
Inventor
卢枝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211253942.1A priority Critical patent/CN115623321A/en
Publication of CN115623321A publication Critical patent/CN115623321A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 - Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046 - Interoperability with other network applications or services
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 - User-to-user messaging characterised by the inclusion of specific contents
    • H04L51/10 - Multimedia information
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 - Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066 - Session management
    • H04L65/1083 - In-session procedures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The application discloses a message processing method, a message processing device, an electronic device, and a readable storage medium, and belongs to the technical field of communication. The method includes: receiving a first message in a case where a shooting preview interface is displayed; acquiring audio data corresponding to a first user in a case where the shooting preview interface includes the first user associated with the first message; generating a second message based on the audio data; and sending the second message to the target session in which the first message is located.

Description

Message processing method and device, electronic equipment and readable storage medium
Technical Field
The application belongs to the technical field of communication, and particularly relates to a message processing method, a message processing device, electronic equipment and a readable storage medium.
Background
At present, it is increasingly common for people to shoot with electronic devices, and shooting lets them record their life, work, and so on anytime and anywhere.
In one scenario, a chat message is received during shooting, and the person the message concerns happens to be in the shooting scene; the message is important and needs an immediate reply, so once that person is notified, he or she has to go and find his or her own device to reply.
Therefore, in the prior art, shooting time is wasted while a person in the shooting scene looks for his or her own device.
Disclosure of Invention
Embodiments of the present application aim to provide a message processing method that can solve the prior-art problem of shooting time being wasted while a person in the shooting scene looks for his or her own device.
In a first aspect, an embodiment of the present application provides a message processing method, where the method includes: receiving a first message in a case where a shooting preview interface is displayed; acquiring audio data corresponding to a first user in a case where the shooting preview interface includes the first user associated with the first message; generating a second message based on the audio data; and sending the second message to the target session in which the first message is located.
In a second aspect, an embodiment of the present application provides a message processing apparatus, including: a receiving module, configured to receive a first message in a case where a shooting preview interface is displayed; a first obtaining module, configured to obtain audio data corresponding to a first user in a case where the shooting preview interface includes the first user associated with the first message; a generating module, configured to generate a second message based on the audio data; and a sending module, configured to send the second message to the target session in which the first message is located.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a computer program product, which is stored in a storage medium and executed by at least one processor to implement the method according to the first aspect.
In this way, in the embodiments of the present application, the shooting preview interface is displayed and shows the scene being shot. If a first message is received and is associated with a first user who appears in the shooting preview interface, the convenience of the first user already being in the shooting scene is used: the first user is shot directly to acquire audio data corresponding to the first user, a second message for the reply is generated from that audio data, and the second message is sent to the target session in which the first message is located. Therefore, based on the embodiments of the present application, the first user can be told during shooting that a message needs a reply and can reply verbally on the spot, without first finding his or her electronic device, which avoids the shooting time wasted while people in the shooting scene search for their own devices.
Drawings
Fig. 1 is a flowchart of a message processing method according to an embodiment of the present application;
FIG. 2 is one of the interface schematic diagrams of the electronic device of the embodiment of the present application;
FIG. 3 is a second schematic interface diagram of an electronic device according to an embodiment of the present application;
FIG. 4 is a third schematic interface diagram of an electronic device according to an embodiment of the present application;
FIG. 5 is a fourth schematic interface diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a fifth schematic interface diagram of an electronic device according to an embodiment of the present application;
fig. 7 is a block diagram of a message processing apparatus according to an embodiment of the present application;
fig. 8 is one of the hardware configuration diagrams of the electronic device according to the embodiment of the present application;
fig. 9 is a second hardware configuration diagram of the electronic device according to the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments derived from the embodiments of the present application by a person of ordinary skill in the art fall within the protection scope of the present application.
The terms "first", "second", and the like in the specification and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the application can be implemented in orders other than those illustrated or described here; moreover, objects distinguished by "first", "second", and the like are generally of one type, and their number is not limited, for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
In the message processing method provided in the embodiments of the present application, the execution subject may be the message processing apparatus provided in the embodiments of the present application, or an electronic device integrated with the message processing apparatus, where the message processing apparatus may be implemented in hardware or software.
The message processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 shows a flowchart of a message processing method according to an embodiment of the present application. Taking application to an electronic device as an example, the method includes:
step 110: in a case where a photographing preview interface is displayed, a first message is received.
In this step, in the photographing mode, a photographing preview interface for displaying a photographing scene is displayed.
For example, the user clicks a "record" button to display a shooting preview interface, and a shooting scene is displayed in the shooting preview interface.
Optionally, the first message is a message from an application other than the camera application.
Optionally, the first message is a chat message.
Step 120: and under the condition that a first user associated with the first message is included in the shooting preview interface, acquiring audio data corresponding to the first user.
In this step, the shooting preview interface includes the first user, that is, the first user is located in the shooting scene, that is, the first user is a person to be shot.
Wherein the first user is associated with the first message, for example, the first user is mentioned in the first message in the form of "@"; as another example, the textual content in the first message mentions the name of the first user; as another example, something mentioned in the first message is being handled by the first user.
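As a minimal illustration of the association described above, the Kotlin sketch below checks whether a chat message @-mentions or names a given user. The patent does not prescribe a data model, so the ChatMessage and Contact types and their fields are assumptions made only for this example.

```kotlin
// Hypothetical data model; the patent does not define concrete types.
data class Contact(val accountName: String, val displayName: String)
data class ChatMessage(val text: String, val mentionedAccounts: List<String> = emptyList())

/**
 * Returns true if the message is associated with the user: either the user is
 * explicitly @-mentioned, or the message text contains the user's display name.
 */
fun isAssociatedWith(message: ChatMessage, user: Contact): Boolean {
    if (user.accountName in message.mentionedAccounts) return true
    if (message.text.contains("@${user.accountName}")) return true
    return message.text.contains(user.displayName)
}
```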
Optionally, the first user is the local user performing the shooting.
Optionally, the first user is not the local user performing the shooting.
Optionally, the first message is from a certain buddy session.
Optionally, the first message is from a certain group session.
For example, in one scenario, the first user is the local user performing the shooting and is hosting the shoot. After receiving the first message, the first user decides the message is important and must be replied to immediately, and speaks the reply content while continuing to shoot; in this step, the audio data of the first user replying to the message is therefore acquired.
For another example, in a scenario where the first user is not the local user and the local user is hosting the shoot, the local user decides after receiving the first message that it is important and verbally asks the first user to reply immediately; the first user speaks the reply content while the shooting continues, and in this step the audio data of the first user replying to the message is acquired.
For another example, in a scenario where the first user is the local user and a non-local user is hosting the shoot, the non-local user hosting the shoot decides after the first message is received that it is important and verbally asks the first user to reply immediately; the first user speaks the reply content while the shooting continues, and in this step the audio data of the first user replying to the message is acquired.
Optionally, while images including the first user are captured, the sound made by the first user is collected synchronously, and the collected sound is the audio data of this step.
Step 130: based on the audio data, a second message is generated.
In this step, a second message is generated as a message in reply to the first message based on the audio data.
Step 140: and sending the second message to the target session in which the first message is positioned.
In this step, the second message is sent to complete the reply to the first message.
In this way, in the embodiments of the present application, the shooting preview interface is displayed and shows the scene being shot. If a first message is received and is associated with a first user who appears in the shooting preview interface, the convenience of the first user already being in the shooting scene is used: the first user is shot directly to acquire audio data corresponding to the first user, a second message for the reply is generated from that audio data, and the second message is sent to the target session in which the first message is located. Therefore, based on the embodiments of the present application, the first user can be told during shooting that a message needs a reply and can reply verbally on the spot, without first finding his or her electronic device, which avoids the shooting time wasted while people in the shooting scene search for their own devices.
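Read as pseudocode, steps 110 to 140 reduce to the control flow sketched below in Kotlin. This is only an illustrative outline: findUserInPreview, captureAudioFor, buildReply, and sendToSession are assumed placeholder hooks, not APIs defined by the application.

```kotlin
// Illustrative outline of steps 110-140; every dependency is injected as a
// placeholder function because the patent describes behaviour, not concrete APIs.
class ReplyWhileShooting(
    private val findUserInPreview: (messageText: String) -> String?,       // step 120: user id or null
    private val captureAudioFor: (userId: String) -> ByteArray,            // step 120: spoken reply audio
    private val buildReply: (audio: ByteArray) -> String,                   // step 130: second message
    private val sendToSession: (sessionId: String, reply: String) -> Unit   // step 140
) {
    /** Invoked when a first message arrives while the shooting preview is displayed (step 110). */
    fun onFirstMessage(sessionId: String, messageText: String) {
        val userId = findUserInPreview(messageText) ?: return  // first user not in the scene
        val audio = captureAudioFor(userId)                    // audio data of the first user
        val reply = buildReply(audio)                          // generate the second message
        sendToSession(sessionId, reply)                        // post it to the target session
    }
}
```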
In a flow of a message processing method according to another embodiment of the present application, step 120 includes:
Substep A1: in a case where the first message includes first user information, determining, according to a first account image corresponding to the first user information, a first face image in the shooting preview interface that matches the first account image, where the first face image corresponds to the first user.
Optionally, the first user information includes an account name and an account number of the first user in the chat application.
For example, in the first message, the account name of the first user is mentioned using an "@" function.
As an example of an application scenario, referring to fig. 2, the shooting preview interface 201 is displayed during shooting; at this moment a first message 202 is received, and in the first message 202, "Zhang San" is mentioned with "@".
Further, based on the first user information, a first account image of the first user in the chat application is determined.
In the application scenario of this embodiment, the first account image includes a face region of the first user.
Therefore, the first account image is compared with each face image in the shooting preview interface; if a first face image that matches the first account image is found, the first face image is considered to correspond to the first user.
Optionally, if the matching succeeds, a pop-up window is displayed near the first face image in the shooting preview interface; its content prompts that the matching succeeded and asks whether the photographer agrees.
For example, referring to fig. 3, a popup 301 is displayed including prompt content: "reply with or without this message", and in popup 301, an "ok" option and a "cancel" option are included.
Optionally, a match is interpreted as the similarity being greater than a certain threshold.
Or,
Substep A2: a first input to the first message and to the first user in the shooting preview interface is received.
The first input includes a touch input performed by the user on the screen, including but not limited to tapping, sliding, and dragging; the first input may also be an air gesture input by the user, such as a gesture action or a facial action; and the first input further includes the user's input on a physical key of the device, including but not limited to pressing. Moreover, the first input includes one or more inputs, and the multiple inputs may be continuous or separated in time.
In this step, the first input is used to manually match the first user and the first message in the photographing preview interface.
For example, referring to fig. 4, the photographer presses the first message 401 and drags it onto the first user 402 in the shooting preview interface.
Substep A3: in response to the first input, the first user is determined in the shooting preview interface.
Optionally, after the first user in the shooting preview interface is matched with the first message based on the first input, a popup is displayed in the vicinity of the first user in the shooting preview interface, and the content in the popup is used for prompting that the matching is successful and inquiring whether the photographer agrees.
In this embodiment, the first user associated with the first message can be determined in the shooting preview interface either manually or automatically, so that the audio data corresponding to the first user can then be acquired. The automatic association avoids manual operation, while the manual association meets individual needs and associates accurately; the two manners suit different scenarios.
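A rough sketch of the automatic association path (substep A1) is given below. The face-embedding extraction is left abstract because the patent does not name a face recognition library, and the 0.8 similarity threshold is an arbitrary assumption rather than a value taken from the text.

```kotlin
import kotlin.math.sqrt

// Hypothetical face descriptor; a real implementation would come from whatever
// face recognition component the device provides.
data class DetectedFace(val personTag: String, val embedding: FloatArray)

fun cosineSimilarity(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i] }
    return dot / (sqrt(na) * sqrt(nb))
}

/**
 * Compares the face embedding of the first account image against every face
 * detected in the shooting preview interface and returns the best match above
 * the (assumed) threshold, or null if no face matches.
 */
fun matchAccountImageToPreview(
    accountEmbedding: FloatArray,
    previewFaces: List<DetectedFace>,
    threshold: Float = 0.8f
): DetectedFace? =
    previewFaces
        .map { it to cosineSimilarity(accountEmbedding, it.embedding) }
        .filter { (_, similarity) -> similarity >= threshold }
        .maxByOrNull { (_, similarity) -> similarity }
        ?.first
```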
In the message processing method according to another embodiment of the present application, in a case where the first user associated with the first message is determined in the shooting preview interface, the camera application and the chat application are associated by default, so that audio data collected by the camera application can be sent directly to the target session of the chat application.
In the flow of the message processing method according to another embodiment of the present application, three ways are provided for generating the second message.
Optionally, before the second message is generated, a pop-up window is displayed in the shooting preview interface; its content provides three options corresponding one-to-one to the three manners of this embodiment, for the photographer to select.
For example, referring to fig. 5, after audio data is acquired, a pop-up window 501 is displayed, and three options are displayed in the pop-up window 501.
Optionally, if a manner has been preset, the second message is generated directly in the preset manner after the audio data is acquired.
Step 130, comprising at least any one of:
substep B1: the audio data is determined as the content of the second message.
In this step, a first way is provided to retain audio data uttered by the first user and generate a voice message.
Substep B2: in the case where the audio data is from video data, the video content corresponding to the audio data is determined as the content of the second message.
In this step, a second manner is provided: the picture captured while the first user speaks is retained, and it is combined with the audio data from the first user to generate a video message.
Substep B3: and determining the text content corresponding to the audio data as the content of the second message.
In this step, a third manner is provided: the audio data from the first user is transcribed into text content, resulting in a text message.
In the present embodiment, the form of the second message is not limited to one, and at least includes common voice, video and text, so as to enrich the diversity of the reply message.
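The three reply forms can be modelled as a small sealed hierarchy, as in the hedged sketch below; the speech-to-text step is an injected function because the patent does not specify which recognizer is used, and all type names here are assumptions.

```kotlin
// The three possible forms of the second message (substeps B1-B3).
sealed interface SecondMessage {
    data class Voice(val audio: ByteArray) : SecondMessage                        // B1: keep the audio
    data class Video(val video: ByteArray, val audio: ByteArray) : SecondMessage  // B2: audio came from video
    data class Text(val text: String) : SecondMessage                             // B3: transcribed audio
}

enum class ReplyMode { VOICE, VIDEO, TEXT }

/**
 * Builds the second message from the captured audio. videoSource is non-null only
 * when the audio was extracted from recorded video; transcribe is an assumed
 * speech-to-text hook, not an API named by the patent.
 */
fun buildSecondMessage(
    mode: ReplyMode,
    audio: ByteArray,
    videoSource: ByteArray?,
    transcribe: (ByteArray) -> String
): SecondMessage = when (mode) {
    ReplyMode.VOICE -> SecondMessage.Voice(audio)
    ReplyMode.VIDEO -> SecondMessage.Video(requireNotNull(videoSource) { "no video data" }, audio)
    ReplyMode.TEXT -> SecondMessage.Text(transcribe(audio))
}
```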
In a flow of a message processing method according to another embodiment of the present application, step 120 includes:
substep C1: a second input is received, the second input for determining start time information and end time information for acquiring the audio data.
The second input includes a touch input performed by the user on the screen, including but not limited to tapping, sliding, and dragging; the second input may also be an air gesture input by the user, such as a gesture action or a facial action; and the second input further includes the user's input on a physical key of the device, including but not limited to pressing. Moreover, the second input includes one or more inputs, and the multiple inputs may be continuous or separated in time.
In the present application, the audio data corresponding to the first user needs to be acquired; because this audio data is used to generate the reply message, it should not appear in the captured video content, so a target time period needs to be set, and the audio data corresponding to the first user is acquired within that target time period.
Alternatively, the start time information and the end time information correspond to a start time and an end time.
For example, during shooting, the photographer or the first user makes a specified gesture at a certain moment, and that moment is determined as the start time; at a later moment, the photographer or the first user makes a specified gesture, and that moment is determined as the end time.
And a substep C2: in response to the second input, audio data corresponding to the first user is acquired for a target time period corresponding to the second input.
In this step, the period between the start time and the end time is a target period.
Audio data corresponding to the first user is acquired within the target time period.
In this embodiment, during shooting, the start time information and the end time information can each be determined by a second input, so that the audio data corresponding to the first user is acquired within the target time period between the two pieces of time information. Based on this embodiment, the time period for acquiring the audio data corresponding to the first user is distinguished from the time period of normal shooting; the audio data can therefore be acquired in a targeted manner, and irrelevant content caused by the first user replying to a message is kept out of the captured video, which ensures shooting quality.
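One plausible way to realize the gesture-bounded target time period is to timestamp the two second inputs and keep only the audio frames that fall between them, as in the sketch below; the AudioFrame type and the millisecond timestamps are assumptions about how the capture pipeline delivers audio.

```kotlin
// Assumed frame type: raw PCM bytes plus a capture timestamp in milliseconds.
data class AudioFrame(val timestampMs: Long, val pcm: ByteArray)

/**
 * Collects audio for the first user only inside the target time period
 * delimited by the start input and the end input (e.g. two specified gestures).
 */
class TargetPeriodRecorder {
    private var startMs: Long? = null
    private var endMs: Long? = null
    private val buffered = mutableListOf<AudioFrame>()

    fun onStartInput(nowMs: Long) { startMs = nowMs }  // start time information
    fun onEndInput(nowMs: Long) { endMs = nowMs }      // end time information

    fun onFrame(frame: AudioFrame) {
        val start = startMs ?: return                  // nothing collected before the start input
        val end = endMs ?: Long.MAX_VALUE
        if (frame.timestampMs in start..end) buffered.add(frame)  // only the target period
    }

    /** Concatenated audio data for the first user within the target time period. */
    fun audioData(): ByteArray =
        buffered.sortedBy { it.timestampMs }.fold(ByteArray(0)) { acc, f -> acc + f.pcm }
}
```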
In a flow of a message processing method according to another embodiment of the present application, the method further includes:
Step D1: acquiring, according to the start time information, a target frame image corresponding to target time information, where the target time information is the time information immediately before the start time information.
Step D2: and acquiring a first image corresponding to a first user in the target frame image.
And D3: and acquiring a first video corresponding to the target time period based on the shooting preview interface.
Step D4: the first user in the first video is replaced according to the first image.
In this embodiment, to ensure that the shooting is not interrupted, before the captured video is output at the end of the shoot, a target frame image captured before the audio data acquisition starts is obtained, and a first image corresponding to the first user is extracted from that frame image; each frame image of the first video corresponding to the target time period is also obtained, so that the first user in each frame of the first video can be replaced with the first image.
Optionally, the target frame image is a frame image corresponding to a time immediately before the start time of the target time period.
Optionally, if shooting has not finished when the target time period ends, the frame image corresponding to the moment immediately after the end time of the target time period may instead be used as the target frame image.
Optionally, when the first user replies to the message only verbally, without extra body movements, the replacement in this embodiment is limited to the face region, so that the first user's original body movements can be retained.
Further, after the replacement is finished, the complete video of this shoot is output.
In this embodiment, to ensure that shooting is not interrupted, face replacement can be used: in the finally output captured video, the video content generated while the first user replies to the message is adjusted, so that the output video content stays consistent and the video quality remains high.
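A rough sketch of steps D1 to D4: take the first user's crop from the target frame (the frame just before the target time period) and paste it over the first user's region in every frame of the first video. Real face replacement would need alignment and blending; the Frame and Region types and the locateFirstUser hook are assumptions used only to show the data flow.

```kotlin
// Assumed frame and region types; a real pipeline would work on Bitmaps or GPU textures.
data class Region(val x: Int, val y: Int, val width: Int, val height: Int)
data class Frame(val pixels: IntArray, val width: Int, val height: Int)

/** Copies the source crop over the given region of the target frame; naive, no blending or alignment. */
fun pasteCrop(target: Frame, source: Frame, region: Region): Frame {
    val out = target.pixels.copyOf()
    for (dy in 0 until region.height) {
        for (dx in 0 until region.width) {
            if (dx >= source.width || dy >= source.height) continue
            val tx = region.x + dx
            val ty = region.y + dy
            if (tx !in 0 until target.width || ty !in 0 until target.height) continue
            out[ty * target.width + tx] = source.pixels[dy * source.width + dx]
        }
    }
    return Frame(out, target.width, target.height)
}

/**
 * Steps D1-D4: firstUserCrop is the first image extracted from the target frame (D1/D2);
 * firstVideo is the video of the target time period (D3); locateFirstUser finds the first
 * user's face region in each frame and is an assumed hook; the region is then replaced (D4).
 */
fun replaceFirstUser(
    firstUserCrop: Frame,
    firstVideo: List<Frame>,
    locateFirstUser: (Frame) -> Region?
): List<Frame> = firstVideo.map { frame ->
    locateFirstUser(frame)?.let { region -> pasteCrop(frame, firstUserCrop, region) } ?: frame
}
```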
In a flow of a message processing method according to another embodiment of the present application, the method further includes:
Step E1: acquiring a second video based on the shooting preview interface.
The second video does not include the video data corresponding to the target time period.
Optionally, in this embodiment, during the video shooting, if the video data corresponding to the first user needs to be acquired, the video shooting is interrupted.
For example, when acquisition of the video data corresponding to the first user starts, shooting is paused; when that acquisition ends, shooting is resumed.
Further, the videos shot before and after the pause are output as the complete video shot at this time.
Optionally, the two pieces of video are manually stitched together by the user as the final captured video.
In this embodiment, although the shooting process is interrupted, a way to reply to a message quickly in the shooting scene is still provided, shortening the time spent on the reply as much as possible and avoiding wasted time.
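On Android, MediaRecorder.pause() and MediaRecorder.resume() (available since API 24) are one plausible way to realize this interrupting strategy; the sketch below only shows the pause/resume timing and assumes the recorder has been configured and started elsewhere. Note that pause/resume keeps writing to a single output file, so the manual stitching mentioned above is only needed if the recording is fully stopped and restarted instead.

```kotlin
import android.media.MediaRecorder
import android.os.Build

/**
 * Pauses the ongoing video recording while the first user's reply audio is captured,
 * then resumes it, so the reply does not appear in the captured video. The recorder
 * is assumed to be configured and started by the caller.
 */
class InterruptingReplyCapture(private val recorder: MediaRecorder) {

    fun onReplyCaptureStarted() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
            recorder.pause()   // the video shot so far becomes the first segment
        }
    }

    fun onReplyCaptureFinished() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
            recorder.resume()  // continue shooting after the reply is captured
        }
    }
}
```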
In the flow of the message processing method according to another embodiment of the present application, step 140 includes:
Substep F1: in a case where the target session is logged in with second user information and the first message is received, sending, based on the second user information, the second message to the target session in which the first message is located.
The second user corresponding to the second user information and the first user are the same member participating in the target session; or the second user corresponding to the second user information and the first user are different members participating in the target session.
For example, in one scenario, the first user and the second user are the same user; that is, during shooting, the person being shot receives a first message that relates to himself or herself and can then reply to it directly in his or her own words.
Optionally, in this scenario, the target session is a friend session; alternatively, the target session is a certain group session.
For another example, in another scenario, the first user and the second user are not the same user; that is, during shooting the second user receives a first message that relates to the first user being shot, and after the audio data corresponding to the first user is acquired, the second user sends the reply message.
Optionally, in this scenario, the target session is a group session, and the first user, the second user, and the user sending the first message are all members participating in the target session.
In this embodiment, the party receiving the first message can send either its own reply or another person's reply to the target session, which avoids the shooting delays caused by different people in the shooting scene searching for their own electronic devices.
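Substep F1 can be pictured as in the sketch below: the device is logged into the target session as the second user and posts the second message there, whether or not the second user and the first user are the same member. The session and client types are assumptions, since the patent does not define a messaging API.

```kotlin
// Assumed chat-side types; the patent does not define a concrete messaging API.
data class SessionMember(val accountId: String, val accountName: String)
data class TargetSession(val sessionId: String, val members: List<SessionMember>)

interface ChatClient {
    /** Posts the content to the session on behalf of the logged-in account. */
    fun post(session: TargetSession, loggedInAccountId: String, content: String)
}

/**
 * Substep F1: the device that received the first message is logged in with the second
 * user's information and sends the second message (built from the first user's spoken
 * reply) to the target session. The first and second user may or may not be the same member.
 */
fun sendReply(
    client: ChatClient,
    session: TargetSession,
    secondUserAccountId: String,
    secondMessageContent: String
) {
    require(session.members.any { it.accountId == secondUserAccountId }) {
        "the second user must be a member of the target session"
    }
    client.post(session, secondUserAccountId, secondMessageContent)
}
```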
In the flow of the message processing method according to another embodiment of the present application, after step 140, the method further includes:
Step G1: in a target session interface corresponding to the target session, displaying the message content of the second message, and displaying at least one of third user information corresponding to the first user, fourth user information used to send the second message, and the referenced first message.
Optionally, the third user information includes a user account name of the chat application, a user account number of the chat application, a user account avatar of the chat application.
Optionally, the fourth user information includes a user account name of the chat application, a user account number of the chat application, and a user account avatar of the chat application.
Optionally, the chat application is logged in with the fourth user information.
For example, in one scenario, the first user is the local user performing the shooting, and the third user information is the same as the fourth user information; the message content of the second message is displayed in the target session together with the third user information or the fourth user information. Further, because the second message is a reply to the first message, the reference relationship of the second message is also displayed; for example, the first message is displayed below the second message in a quoted format.
For another example, in one scenario the first user is not the local user performing the shooting, and both the first user and that local user are members participating in the target session. In such a scenario, to clearly describe each message, each user, and the relationships between them: first, in addition to the message content of the second message, the fourth user information, that is, the user information used to send the second message, is displayed; second, the third user information is displayed to indicate that the message was replied to by the first user; third, the reference relationship of the second message is displayed.
Referring to fig. 6, the second message is shown in two parts: one part is shown in the message box 601 and includes the message content together with the text "reply from Zhang San", which indicates the third user information; the other part is shown below the message box 601 and includes the referenced first message 602. In addition, the avatar of the user account that sent the message is displayed with the message box 601 to indicate the fourth user information.
In this embodiment, a message can be replied to quickly during shooting, but that reply scene is not easy to convey inside the target session. Therefore, to describe the replied message clearly, in addition to the message content of the second message, the third user information corresponding to the first user, the fourth user information used to send the second message, the referenced first message, and the like can be displayed, making the relationships between the reply message, the quoted message, and each user clear at a glance to every member participating in the target session.
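To make the arrangement of fig. 6 concrete, the displayed reply can be modelled as a small structure carrying the message content, the quoted first message, and the two pieces of user information; the field names below are assumptions chosen to mirror the wording of this embodiment.

```kotlin
// Assumed display model for the second message in the target session interface.
data class DisplayedReply(
    val content: String,              // message content of the second message
    val quotedFirstMessage: String?,  // the referenced first message, if shown
    val repliedByUserName: String?,   // third user information: the first user who spoke the reply
    val sentFromUserName: String?     // fourth user information: the account that sent it
)

/** Renders the reply roughly the way fig. 6 describes it. */
fun renderReply(reply: DisplayedReply): String = buildString {
    appendLine(reply.content)
    reply.repliedByUserName?.let { appendLine("reply from $it") }  // e.g. "reply from Zhang San"
    reply.quotedFirstMessage?.let { appendLine("> $it") }          // quoted first message below
    reply.sentFromUserName?.let { appendLine("sent by $it") }
}
```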
In summary, an object of the present application is to provide a method for quickly replying to a chat message during shooting. First, the shooting process may remain uninterrupted; second, a new interaction manner is provided, in which the audio data of the person being shot can be acquired during shooting and used for the reply; third, the operation is simple and the reply is fast.
In the message processing method provided by the embodiments of the present application, the execution subject may be a message processing apparatus. The message processing apparatus provided in the embodiments of the present application is described by taking the message processing apparatus executing the message processing method as an example.
Fig. 7 is a block diagram of a message processing apparatus according to another embodiment of the present application, including:
a receiving module 10, configured to receive a first message when a shooting preview interface is displayed;
a first obtaining module 20, configured to obtain, when a first user associated with the first message is included in the shooting preview interface, audio data corresponding to the first user;
a generating module 30 for generating a second message based on the audio data;
and the sending module 40 is configured to send the second message to the target session where the first message is located.
In this way, in the embodiments of the present application, the shooting preview interface is displayed and shows the scene being shot. If a first message is received and is associated with a first user who appears in the shooting preview interface, the convenience of the first user already being in the shooting scene is used: the first user is shot directly to acquire audio data corresponding to the first user, a second message for the reply is generated from that audio data, and the second message is sent to the target session in which the first message is located. Therefore, based on the embodiments of the present application, the first user can be told during shooting that a message needs a reply and can reply verbally on the spot, without first finding his or her electronic device, which avoids the shooting time wasted while people in the shooting scene search for their own devices.
Optionally, the first obtaining module 20 includes:
a first determining unit, configured to determine, in a case where the first message includes first user information, a first face image that matches the first account image in the shooting preview interface, according to the first account image corresponding to the first user information, the first face image corresponding to the first user;
a first receiving unit, configured to receive a first input to the first message and to the first user in the shooting preview interface;
and a second determination unit for determining the first user in the photographing preview interface in response to the first input.
Optionally, the generating module 30 includes:
a third determination unit for determining the audio data as the content of the second message;
a fourth determination unit configured to determine, as the content of the second message, video content corresponding to the audio data in a case where the audio data is from the video data;
and a fifth determining unit, configured to determine the text content corresponding to the audio data as the content of the second message.
Optionally, the first obtaining module 20 includes:
a second receiving unit for receiving a second input for determining start time information and end time information of acquiring the audio data;
an acquisition unit configured to acquire, in response to a second input, audio data corresponding to the first user within a target time period corresponding to the second input.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring, according to the start time information, a target frame image corresponding to target time information, wherein the target time information is the time information immediately before the start time information;
the third acquisition module is used for acquiring a first image corresponding to a first user in the target frame image;
the fourth acquisition module is used for acquiring a first video corresponding to the target time period based on the shooting preview interface;
and the replacing module is used for replacing the first user in the first video according to the first image.
Optionally, the apparatus further comprises:
the fifth acquisition module is used for acquiring a third video based on the shooting preview interface;
wherein, in the third video, the corresponding video data in the target time period is not included.
Optionally, the sending module 40 includes:
the sending unit is used for sending the second message to the target session where the first message is located based on the second user information under the condition that the target session is logged in by the second user information and the first message is received;
the second user and the first user corresponding to the second user information are the same members participating in the target session; or the second user and the first user corresponding to the second user information are different members participating in the target session.
Optionally, the apparatus further comprises:
and the display module is used for displaying the message content of the second message in a target session interface corresponding to the target session, and displaying at least one of third user information corresponding to the first user, fourth user information for sending the second message and the quoted first message.
The message processing apparatus in the embodiments of the present application may be an electronic device, or may be a component in the electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic device may be, for example, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and may also be a server, a Network Attached Storage (NAS), a personal computer (PC), a Television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of the present application.
The message processing apparatus according to the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The message processing apparatus provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 8, an electronic device 100 is further provided in the embodiment of the present application, and includes a processor 101, a memory 102, and a program or an instruction stored in the memory 102 and capable of running on the processor 101, where the program or the instruction is executed by the processor 101 to implement each step of any one of the foregoing message processing method embodiments, and can achieve the same technical effect, and is not described herein again to avoid repetition.
It should be noted that the electronic device according to the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 1010 is configured to receive a first message when the shooting preview interface is displayed; under the condition that a first user associated with the first message is included in the shooting preview interface, audio data corresponding to the first user are obtained; generating a second message based on the audio data; and sending the second message to the target session of the first message.
In this way, in the embodiments of the present application, the shooting preview interface is displayed and shows the scene being shot. If a first message is received and is associated with a first user who appears in the shooting preview interface, the convenience of the first user already being in the shooting scene is used: the first user is shot directly to acquire audio data corresponding to the first user, a second message for the reply is generated from that audio data, and the second message is sent to the target session in which the first message is located. Therefore, based on the embodiments of the present application, the first user can be told during shooting that a message needs a reply and can reply verbally on the spot, without first finding his or her electronic device, which avoids the shooting time wasted while people in the shooting scene search for their own devices.
Optionally, the processor 1010 is further configured to, if the first message includes first user information, determine, in the shooting preview interface and according to a first account image corresponding to the first user information, a first face image matching the first account image, where the first face image corresponds to the first user; the user input unit 1007 is configured to receive a first input to the first message and to the first user in the shooting preview interface; and the processor 1010 is further configured to determine the first user in the shooting preview interface in response to the first input.
Optionally, a processor 1010, further configured to determine the audio data as content of the second message; determining video content corresponding to the audio data as content of the second message in case the audio data is from video data; and determining the text content corresponding to the audio data as the content of the second message.
Optionally, the user input unit 1007 is further configured to receive a second input, where the second input is used to determine start time information and end time information for acquiring the audio data; and the processor 1010 is further configured to, in response to the second input, obtain audio data corresponding to the first user during a target time period corresponding to the second input.
Optionally, the processor 1010 is further configured to obtain, according to the start time information, a target frame image corresponding to target time information, where the target time information is previous time information of the start time information; acquiring a first image corresponding to the first user in the target frame image; acquiring a first video corresponding to the target time period based on the shooting preview interface; replacing the first user in the first video according to the first image.
Optionally, the processor 1010 is further configured to obtain a third video based on the shooting preview interface; wherein, in the third video, the corresponding video data in the target time period is not included.
Optionally, the processor 1010 is further configured to, when logging in the target session with second user information and receiving the first message, send the second message to the target session where the first message is located based on the second user information; the second user corresponding to the second user information and the first user are the same members participating in the target session; or the second user corresponding to the second user information and the first user are different members participating in the target session.
Optionally, the display unit 1006 is configured to display, in a target conversation interface corresponding to the target conversation, the message content of the second message, and at least one of third user information corresponding to the first user, fourth user information for sending the second message, and the referenced first message.
In summary, an object of the present application is to provide a method for quickly replying to a chat message during shooting. First, the shooting process may remain uninterrupted; second, a new interaction manner is provided, in which the audio data of the person being shot can be acquired during shooting and used for the reply; third, the operation is simple and the reply is fast.
It should be understood that, in the embodiments of the present application, the input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the graphics processing unit 10041 processes image data of a still picture or a video image obtained by an image capturing device (such as a camera) in a video image capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs as well as various data, including but not limited to applications and an operating system. The processor 1010 may integrate an application processor, which primarily handles the operating system, user interface, applications, and the like, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 1010.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area for storing a program or an instruction and a second storage area for storing data, where the first storage area may store an operating system, an application program or an instruction required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory 1009 may include volatile memory or non-volatile memory, or the memory 1009 may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), a Synchronous Dynamic Random Access Memory (SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), an Enhanced Synchronous DRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned message processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned message processing method embodiment, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes in the foregoing message processing method embodiments, and can achieve the same technical effects, and in order to avoid repetition, details are not described here again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A method of message processing, the method comprising:
receiving a first message under the condition that a shooting preview interface is displayed;
acquiring audio data corresponding to a first user under the condition that the shooting preview interface comprises the first user associated with the first message;
generating a second message based on the audio data;
and sending the second message to a target session in which the first message is located.
2. The method of claim 1, wherein, in the event that a first user associated with the first message is included in the shooting preview interface, obtaining audio data corresponding to the first user comprises:
in a case where the first message includes first user information, determining, according to a first account image corresponding to the first user information, a first face image in the shooting preview interface that matches the first account image, wherein the first face image corresponds to the first user;
or,
receiving a first input to the first message and the first user in the shooting preview interface;
in response to the first input, determining the first user in the shooting preview interface.
3. The method of claim 1, wherein generating the second message based on the audio data comprises at least any one of:
determining the audio data as content of the second message;
determining video content corresponding to the audio data as content of the second message in case the audio data is from video data;
and determining the text content corresponding to the audio data as the content of the second message.
4. The method of claim 1, wherein the obtaining audio data corresponding to the first user comprises:
receiving a second input, the second input being used to determine start time information and end time information for acquiring the audio data;
in response to the second input, audio data corresponding to the first user is acquired for a target time period corresponding to the second input.
5. The method of claim 4, further comprising:
acquiring, according to the start time information, a target frame image corresponding to target time information, wherein the target time information is the time information immediately before the start time information;
acquiring a first image corresponding to the first user in the target frame image;
acquiring a first video corresponding to the target time period based on the shooting preview interface;
replacing the first user in the first video according to the first image.
6. The method of claim 4, further comprising:
acquiring a third video based on the shooting preview interface;
wherein, in the third video, the corresponding video data in the target time period is not included.
7. The method of claim 1, wherein sending the second message to the target session in which the first message is located comprises:
under the condition that second user information is used for logging in the target session and the first message is received, sending the second message to the target session where the first message is located based on the second user information;
the second user corresponding to the second user information and the first user are the same members participating in the target session; or the second user corresponding to the second user information and the first user are different members participating in the target session.
8. The method of claim 1, wherein after sending the second message to the target session in which the first message is located, the method further comprises:
displaying the message content of the second message, and displaying at least one of third user information corresponding to the first user, fourth user information for sending the second message, and the referenced first message in a target session interface corresponding to the target session.
9. A message processing apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving a first message under the condition of displaying a shooting preview interface;
the first obtaining module is used for obtaining audio data corresponding to a first user under the condition that the shooting preview interface comprises the first user related to the first message;
a generating module for generating a second message based on the audio data;
and the sending module is used for sending the second message to the target session in which the first message is positioned.
10. An electronic device comprising a processor and a memory, the memory storing a program or instructions executable on the processor, the program or instructions when executed by the processor implementing the steps of the message processing method as claimed in any one of claims 1 to 8.
11. A readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the message processing method according to any one of claims 1 to 8.
CN202211253942.1A 2022-10-13 2022-10-13 Message processing method and device, electronic equipment and readable storage medium Pending CN115623321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211253942.1A CN115623321A (en) 2022-10-13 2022-10-13 Message processing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211253942.1A CN115623321A (en) 2022-10-13 2022-10-13 Message processing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115623321A (en) 2023-01-17

Family

ID=84862924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211253942.1A Pending CN115623321A (en) 2022-10-13 2022-10-13 Message processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115623321A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination