WO2023045856A1 - Information processing method and apparatus, electronic device and medium - Google Patents


Info

Publication number
WO2023045856A1
WO2023045856A1 · PCT/CN2022/119576 · CN2022119576W
Authority
WO
WIPO (PCT)
Prior art keywords
video
information
frame
message
image sequence
Prior art date
Application number
PCT/CN2022/119576
Other languages
English (en)
Chinese (zh)
Inventor
滕孝军 (TENG Xiaojun)
Original Assignee
维沃移动通信(杭州)有限公司 (Vivo Mobile Communication (Hangzhou) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 维沃移动通信(杭州)有限公司 (Vivo Mobile Communication (Hangzhou) Co., Ltd.)
Publication of WO2023045856A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages

Definitions

  • the present application belongs to the technical field of information processing and, more specifically, relates to an information processing method, apparatus, electronic device, and medium.
  • the purpose of the embodiments of the present application is to provide an information processing method, apparatus, electronic device and medium, which can solve the problem of complicated operations for replying to messages while watching a video.
  • the embodiment of the present application provides an information processing method, which includes: receiving a first communication message while a first video is being played, and outputting a first prompt message in the playing window of the first video;
  • the first prompt message includes first video information, and the content of the first video information is determined according to the content of the first communication message.
  • an information processing device which includes:
  • a receiving module configured to receive the first communication message when the first video is played
  • an output module configured to output a first prompt message in the playing window of the first video, the first prompt message including first video information, the content of which is determined according to the content of the first communication message.
  • an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instruction stored in the memory and executable on the processor, where the program or instruction, when executed by the processor, implements the steps of the method described in the first aspect.
  • an embodiment of the present application provides a readable storage medium on which a program or instruction is stored; when the program or instruction is executed by a processor, the steps of the method described in the first aspect are implemented.
  • the embodiment of the present application provides a chip, the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is used to run programs or instructions so as to implement the method described in the first aspect.
  • in the embodiment of the present application, when the first communication message is received while the first video is being watched, the first prompt message is output in the play window of the first video. Because the first prompt message includes the first video information, and the content of the first video information is determined according to the content of the first communication message, a user who receives the first communication message while watching the first video does not need to switch to the application program corresponding to the first communication message: the first communication message can be viewed directly through the first video information, and the user can learn its content without any operation, which improves the viewing experience.
  • FIG. 1 is a schematic flowchart of an information processing method provided in an embodiment of the present application
  • FIG. 2a to FIG. 2c are schematic diagrams of interface displays of the electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an information processing device provided in an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by another embodiment of the present application.
  • FIG. 1 is a flowchart of an information processing method provided by an embodiment of the present application.
  • the method can be applied to an electronic device, and the electronic device can be a mobile phone, a tablet computer, a notebook computer, and the like.
  • the method may include steps 1100 to 1200, which will be described in detail below.
  • Step 1100 in the case of playing the first video, receive a first communication message.
  • the first video is a video currently being played by the user using a video playing application program.
  • the video can be a short video, TV series, movie, etc.
  • the first communication message is a message sent by the first account to the second account registered in the electronic device.
  • the first account and the second account may both be different accounts logged into the same social software application.
  • the second account is an account for logging into the social software application 1 in the electronic device
  • the first account is an account for logging in the social software application 1 in another electronic device.
  • the first account and the second account may both be different mobile phone numbers.
  • the second account is the number of the SIM card inserted in the electronic device
  • the first account is the number of the SIM card inserted in another electronic device.
  • the user is using a video playing application to play the first video
  • the frame image played by the first video is the image shown in FIG. 2a.
  • a message "XX, please go home for dinner!" is sent from the first account logged into the social software application 1 in another electronic device to the second account logged into the social software application 1 in the electronic device.
  • "XX, please go home for dinner!" is the first communication message.
  • the first communication message also includes application information of the social software application 1, for example, the name of the social software application 1, so that the user can reply to the first account later.
  • Step 1200 in the playing window of the first video, output a first prompt message.
  • the first prompt message includes first video information, and the content of the first video information is determined according to the content of the first communication message.
  • the first video information may include first text information, first voice information and a first interpolated image sequence.
  • the contents of the first text information and the first voice information are determined according to the contents of the first communication message, and the first frame insertion image sequence includes at least one video frame of the first video.
  • the first video information is a short video inserted in the first video.
  • the first communication message can be analyzed to generate first text information and first voice information corresponding to the first communication message.
  • for the convenience of distinguishing the text and voice information corresponding to the first communication message from the text and voice information subsequently input by the user, this application refers to the text information and voice information corresponding to the first communication message as the first text information and the first voice information, respectively, and refers to the text information and voice information input by the user as the second text information and the second voice information.
  • the above first text information may be used as subtitle information of the generated first video information.
  • the above first voice information may be used as voice information of the generated first video information.
  • for example, the first communication message includes "XX, please go home for dinner!";
  • the first text information is "XX, please go home for dinner!";
  • the first voice information is "XX, please go home for dinner!".
  • outputting the first prompt message in the playback window of the first video may further include: playing the first voice information and the first interpolated image sequence in the playing window of the first video, and displaying the first text information.
  • for example, if the first communication message includes "XX, please go home for dinner!", the first voice information "XX, please go home for dinner!" and the first interpolated image sequence can be played in the play window of the first video, where one of the interpolated images in the first interpolated image sequence may be as shown in FIG. 2b, and the first text information "XX, please go home for dinner!" is displayed.
  • since the playback window of the first video not only plays the first interpolated image sequence and the first voice information but also displays the first text information, the user can learn the content of the first communication message, which improves the viewing experience.
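As a concrete illustration (not part of the claimed method), the derivation of the first text information and first voice information from a communication message could be sketched as follows in Python; `synthesize_speech` is a hypothetical text-to-speech hook supplied by the caller, since the application does not name a specific synthesis engine.

```python
def build_prompt_content(message_text, synthesize_speech):
    """Derive the first text information (used as subtitles) and the
    first voice information (used as the audio track) from the content
    of the incoming communication message."""
    first_text_info = message_text
    first_voice_info = synthesize_speech(message_text)
    return first_text_info, first_voice_info

# Usage with a stand-in synthesizer that merely tags the text:
text, voice = build_prompt_content(
    "XX, please go home for dinner!", lambda t: "audio:" + t)
```

The split into subtitle and audio tracks mirrors how the first text and first voice information are consumed later, when their timestamps are aligned with the interpolated frames.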
  • the information processing method of the present disclosure may further include the following step: before outputting the first prompt message in the play window of the first video, inserting the first interpolated image sequence after a first video frame, where the first video frame is the video frame displayed in the play window when the first communication message is received.
  • the first interpolated image sequence includes at least one interpolated image, and the first interpolated image sequence is located after the first video frame of the first video.
  • the first interpolated image sequence may be an image sequence calculated by an intelligent frame interpolation algorithm from the first video frame and a second video frame of the first video, where the second video frame is the next video frame after the first video frame in the first video.
  • an intermediate video frame sequence between the first video frame and the second video frame may be calculated by using the optical flow method as the first frame interpolation image sequence.
  • an intermediate video frame sequence between the first video frame and the second video frame may also be calculated by using the optical flow method combined with deep learning as the first frame interpolation image sequence.
  • other intelligent frame insertion algorithms may also be used to calculate the first frame insertion image sequence, which is not limited in this embodiment.
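For illustration only, a heavily simplified stand-in for the interpolation step is sketched below: instead of optical flow, it linearly cross-fades pixel values between the two frames. This conveys the idea of generating intermediate frames between the first and second video frames, but not the motion-aware warping that a real optical-flow method performs.

```python
def interpolate_frames(frame_a, frame_b, n):
    """Produce n intermediate frames between frame_a and frame_b by
    linear blending. Frames are flat lists of pixel intensities; a real
    optical-flow interpolator would instead warp pixels along estimated
    motion vectors."""
    sequence = []
    for i in range(1, n + 1):
        t = i / (n + 1)  # blend weight moves from frame_a toward frame_b
        sequence.append([round((1 - t) * a + t * b)
                         for a, b in zip(frame_a, frame_b)])
    return sequence

# One intermediate frame between two tiny 2-pixel "frames":
mid = interpolate_frames([0, 100], [100, 0], 1)
```

In practice the blend would be replaced by a dense optical-flow estimate (or a learned interpolation network), as the surrounding paragraphs describe.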
  • the first frame interpolation image sequence can also be other image sequences that have nothing to do with the first video.
  • the first interpolated image sequence can be played in the playback window in picture-in-picture form, or in a new floating window added to the playback window.
  • this embodiment does not specifically limit the playing form of the first interpolated image sequence.
  • the first interpolated image sequence may be inserted between any two video frames of the first video, and after the first video information is played, the first video may continue to be played from the first video frame.
  • for example, when the first communication message "XX, please go home for dinner!" is received, the video frame displayed in the play window is the video frame shown in FIG. 2a; here, the video frame shown in FIG. 2a is used as the first video frame.
  • the video frame shown in FIG. 2c is the video frame next to the first video frame in the first video; here, the video frame shown in FIG. 2c may be used as the second video frame.
  • the intermediate video frame sequence between the first video frame and the second video frame can be calculated based on the optical flow method as the first interpolated image sequence, one frame of which is shown in FIG. 2b.
  • in this step, when the first communication message is received, the first interpolated image sequence is inserted after the video frame displayed in the playback window, and the first video information is generated based on the first interpolated image sequence, so that in the first video, through intelligent frame interpolation, an object in the first video can read out the first communication message through its lines; without switching to the corresponding application program, the user can still learn the content of the first communication message in time.
  • the information processing method of the present disclosure may further include the following step 2100: aligning the time stamps of the first text information, the first voice information and the first interpolated image sequence.
  • step 2100, aligning the time stamps of the first text information, the first voice information and the first interpolated image sequence, may further include the following steps 2110 to 2130:
  • Step 2110 identify the target object in the first video frame.
  • the target object is an object in a speaking state when playing the first video frame in the first video.
  • when the first communication message is received, a target number of video frames preceding the first video frame being played in the first video may be analyzed, so as to determine the speaking object in the first video frame as the target object.
  • the target quantity may be a numerical value set according to actual application scenarios and actual requirements.
  • for example, the first video frame played in the first video is the video frame shown in FIG. 2a, and the object in a speaking state in the first video frame is a female student; here, the female student is used as the target object in the first video frame.
  • the corresponding playing time of the first video may be 1:00 minutes.
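A minimal sketch of step 2110, under the assumption that each decoded frame carries (or can be annotated with) a field naming the object currently in a speaking state; the field name `speaking` is illustrative, not taken from the application.

```python
def find_target_object(frames, target_count):
    """Scan the last target_count frames before the current playback
    position and return the most recently speaking object, or None if
    no object was speaking in that window."""
    for frame in reversed(frames[-target_count:]):
        if frame.get("speaking"):
            return frame["speaking"]
    return None

# The female student spoke most recently, so she becomes the target object:
target = find_target_object(
    [{"speaking": None}, {"speaking": "female student"}, {"speaking": None}], 3)
```

The target quantity is the same tunable window size mentioned above; a larger window makes the search more robust to pauses in speech.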
  • Step 2120 acquire the first time stamp showing the first video frame, the second time stamp showing the subtitle information corresponding to the first video frame, and the third time stamp when the target object speaks.
  • the first time stamp is a time stamp showing the first video frame.
  • the second time stamp is a time stamp for displaying subtitle information corresponding to the first video frame.
  • the third time stamp is the time stamp when the target object speaks.
  • in order to synchronously display the video frames, subtitle information and voice information in the first video, the electronic device usually aligns the time stamps corresponding to the video frames, the subtitle information and the voice information, that is, performs time stamp synchronization, so that the video frames, subtitle information and voice information are displayed synchronously when the first video is played.
  • Step 2130 obtain the fourth time stamp showing the first interpolated frame image sequence according to the first time stamp, obtain the fifth time stamp showing the first text information according to the second time stamp, and determine the time stamp of the first voice information according to the third time stamp the sixth timestamp, and align the fourth timestamp, the fifth timestamp, and the sixth timestamp.
  • in step 2130, firstly, the fourth time stamp for displaying the first interpolated image sequence is determined according to the first time stamp; the fifth time stamp for displaying the first text information is determined according to the second time stamp; and the sixth time stamp of the first voice information is determined according to the third time stamp. Further, the fourth, fifth and sixth time stamps are aligned, that is, synchronized, so that the first text information and the first voice information are added to the first interpolated image sequence to generate the first video information.
  • the fourth time stamp is a time stamp for displaying the first frame interpolation image sequence.
  • the fifth time stamp is a display time stamp for displaying the first text information.
  • the sixth time stamp is the time stamp of playing the first voice information.
  • the fourth time stamp, the fifth time stamp and the sixth time stamp are subjected to time stamp synchronization processing, and the first video information can be generated according to the first text information, the first voice information and the first interpolated image sequence.
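The alignment of the fourth, fifth and sixth timestamps can be pictured as shifting the subtitle and voice tracks onto the frame track's start time. The Python sketch below (timestamps in milliseconds, an assumption made for illustration) preserves each track's internal spacing while forcing a common start:

```python
def align_channels(frame_ts, subtitle_ts, voice_ts):
    """Align three timestamp tracks (interpolated frames, subtitles,
    voice) by shifting the subtitle and voice tracks so that each
    starts exactly when the frame track starts."""
    base = frame_ts[0]

    def shift(track):
        offset = base - track[0]
        return [t + offset for t in track]

    return frame_ts, shift(subtitle_ts), shift(voice_ts)

# Subtitles lead by 200 ms and voice lags by 100 ms; after alignment
# all three tracks start at the same instant:
frames, subs, voice = align_channels(
    [69000, 69500], [68800, 69300], [69100, 69600])
```

Only the relative offsets between tracks matter here; an actual player would apply the same shift when muxing the subtitle and audio channels into the generated clip.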
  • voice changing processing may be performed so that the timbre of the first voice information is the same as the timbre of a certain object in the first video.
  • for example, the object in the speaking state in the first video frame (FIG. 2a) displayed in the first video is the girl in the first video frame, and what the girl said was "Scared!", corresponding to the playing time of 1:00 in the first video.
  • time stamp synchronization is performed according to the fourth time stamp, the fifth time stamp and the sixth time stamp, so as to generate the first video information from the first interpolated image sequence, the first text information and the first voice information, as shown in the figure:
  • the subtitle information of the first video information is "XX, please go home for dinner!";
  • the girl's line in the first video information is "XX, please go home for dinner!";
  • the playing time point of the first video information may be 1:09 minutes.
  • the first video information is generated according to the first interpolated image sequence, the first text information and the first voice information; since the first text information and the first voice information correspond to the first communication message, the user can view the first communication message efficiently and intuitively.
  • in this way, when the first communication message is received, the first prompt message is output in the play window of the first video. Because the first prompt message includes the first video information, and the content of the first video information is determined according to the content of the first communication message, a user who receives the first communication message while watching the first video does not need to switch to the application program corresponding to the message: the first communication message can be viewed directly through the first video information, and the user can learn its content without any operation, which improves the viewing experience.
  • the information processing method of the embodiment of the present disclosure may further include: starting from the second video frame, continuing to play the first video.
  • the second video frame is a video frame next to the first video frame in the first video.
  • for example, the second video frame is the video frame shown in FIG. 2c, and the playing time of the second video frame before the first prompt information is output is, for example, 1:04 minutes.
  • since the playing time of the first video information is 1:09, when the first prompt information is output, the playing time point corresponding to the second video frame is postponed to 1:09, after which the boy in the second video frame says "don't be afraid".
  • after the first video information is played, playback of the first video resumes, thereby linking the reading of the user's message with the display of the video and bringing the user an immersive viewing experience.
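The postponement described above (the second video frame moving from 1:04 to 1:09 after a five-second prompt clip) amounts to delaying every original frame scheduled at or after the insertion point by the clip's duration. A sketch, with times in seconds for illustration:

```python
def shift_after_insert(frame_times, insert_at, clip_duration):
    """Delay every frame scheduled at or after insert_at by
    clip_duration, so the original video resumes only once the
    inserted prompt clip has finished playing."""
    return [t + clip_duration if t >= insert_at else t for t in frame_times]

# A 5-second prompt inserted at 1:04 pushes the 1:04 frame to 1:09:
times = shift_after_insert([60, 64, 65], 64, 5)
```

Frames before the insertion point keep their original schedule, which is why playback up to the first video frame is unaffected.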
  • the information processing method of the embodiment of the present disclosure further includes the following steps 3100-3400:
  • Step 3100 receiving the first input from the user.
  • the first input may be: the user's click input on the playback window of the first video, a voice command input by the user, or a specific gesture input by the user; it may be determined according to actual usage requirements and is not limited in this embodiment of the present application.
  • the specific gesture in the embodiment of the present application may be any one of a click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long press gesture, an area change gesture, a double-press gesture, and a double-click gesture;
  • the click input can be single-click input, double-click input, or any number of click inputs, etc., and can also be long-press input or short-press input.
  • receiving the user's first input in step 3100 may further include: receiving the user's first input to the play window.
  • the electronic device after outputting the first prompt information, the electronic device will continue to play the first video starting from the second video frame.
  • the user performs a touch input on the playback window of the first video, and the touch input can be a double-click input.
  • for example, the second video frame shown in FIG. 2c is played in the play window of the first video; at this time, the user can double-click the girl in the second video frame.
  • Step 3200 generating second video information in response to the first input.
  • generating the second video information may further include the following steps 3210 to 3240:
  • Step 3210 in response to the first input, display a message reply window.
  • the message reply window includes a text input box and a voice input box, text information can be input through the text input box, and voice information can be input through the voice input box.
  • a dialog box can pop up on the second video frame as the message reply window.
  • Step 3220 receiving a second input from the user on the message reply window.
  • the second input may be: the user's click input on the message reply window, a voice command input by the user, or a specific gesture input by the user; it may be determined according to actual usage requirements and is not limited in this embodiment of the present application.
  • the specific gesture in the embodiment of the present application may be any one of a click gesture, a sliding gesture, a drag gesture, a pressure recognition gesture, a long press gesture, an area change gesture, a double-press gesture, and a double-click gesture;
  • the click input can be single-click input, double-click input, or any number of click inputs, etc., and can also be long-press input or short-press input.
  • for example, a dialog box pops up on the second video frame; the dialog box includes a text input box and a voice input box. The text "OK" can be entered in the text input box, and the voice "OK" can be recorded by long-pressing the voice input box.
  • Step 3230 in response to the second input, determine second text information and second voice information.
  • the electronic device may determine the word "OK" input through the text input box as the second text information, and determine the voice "OK" input through the voice input box as the second voice information.
  • Step 3240 Synthesize the second text information, the second voice information and the second interpolated image sequence to obtain the second video information.
  • the second interpolated image sequence includes at least one video frame of the first video.
  • the second interpolated image sequence includes at least one interpolated image, and the second interpolated image sequence is located after the second video frame of the first video.
  • the second frame insertion image sequence is an image sequence calculated by an intelligent frame insertion algorithm from the second video frame and the third video frame of the first video.
  • the third video frame is the next video frame of the second video frame in the first video.
  • an intermediate video frame sequence between the second video frame and the third video frame may be calculated by using the optical flow method as the second frame interpolation image sequence.
  • an intermediate video frame sequence between the second video frame and the third video frame may also be calculated by using the optical flow method combined with deep learning as the second frame interpolation image sequence.
  • other intelligent frame insertion algorithms may also be used to calculate the second frame insertion image sequence, which is not limited in this embodiment.
  • specifically, the timestamp corresponding to the second video frame, the timestamp corresponding to the subtitle information matching the second video frame, and the timestamp of the target object's speech may first be obtained, and the timestamp corresponding to the second interpolated image sequence is determined according to the timestamp corresponding to the second video frame.
  • the timestamp corresponding to the second text information is determined according to the timestamp corresponding to the subtitle information matching the second video frame, and the timestamp corresponding to the second voice information is determined according to the timestamp of the target object's speech.
  • then, the timestamps corresponding to the second interpolated image sequence, the second text information and the second voice information are synchronized; the second text information is converted into subtitle information and added to the subtitle channel, the second voice information is added to the audio channel, and the second video information is then generated together with the second interpolated image sequence.
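The final assembly — subtitle channel, audio channel and interpolated frames combined into the second video information — can be sketched as a simple container; the field names below are illustrative, not taken from the application:

```python
def assemble_reply_video(interp_frames, reply_text, reply_voice):
    """Combine the second interpolated image sequence with the reply
    text (subtitle channel) and reply voice (audio channel) to form
    the second video information."""
    return {
        "frames": list(interp_frames),
        "subtitle_channel": [reply_text],   # second text information
        "audio_channel": [reply_voice],     # second voice information
    }

reply = assemble_reply_video(["frame1", "frame2"], "OK", "voice:OK")
```

Dropping either list (as the next bullet allows) yields a subtitle-only or voice-only reply video.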
  • only the second frame insertion image sequence and corresponding subtitle information may be generated, or only the second frame insertion image sequence and corresponding second voice information may be generated.
  • this provides a customized message reply method, which can reply to the communication object of the first communication message with an interpolated-frame video; the communication object can directly view the message replied by the user through the second video information, which adds fun to message interaction and increases the immersive experience of video viewers, without switching out of the video for message interaction.
  • Step 3300 sending the second video information to the communication object of the first communication message.
  • the communication object of the first communication message may be the first account logged into the social software application program 1 in another electronic device.
  • the communication object of the first communication message may also be the number of the SIM card inserted in another electronic device.
  • the subtitle information of the second video information is "OK";
  • the girl's line in the second video information is "OK";
  • this embodiment supports generating the second video information in the first video and sending it to the communication object of the first communication message; the communication object can directly view the user's reply message through the second video information, which increases the fun of message interaction.
  • the information processing method provided in the embodiment of the present application may be executed by an information processing device, or by a control module in the information processing device for executing the information processing method.
  • the information processing device provided in the embodiment of the present application is described by taking the information processing device executing the information processing method as an example.
  • the embodiment of the present application also provides an information processing device 300, including:
  • the receiving module 310 is configured to receive the first communication message when the first video is played.
  • the prompt module 320 is configured to output a first prompt message in the playing window of the first video, the first prompt message including first video information, the content of which is determined according to the content of the first communication message.
  • when the first communication message is received while the first video is being watched, the first prompt message is output in the play window of the first video. Because the first prompt message includes the first video information, and the content of the first video information is determined according to the content of the first communication message, a user who receives the first communication message while watching the first video does not need to switch to the application program corresponding to the first communication message: the first communication message can be viewed directly through the first video information, and the user can learn its content without any operation, which improves the viewing experience.
  • the prompt module 320 is specifically configured to: play the first voice information and the first interpolated image sequence in the play window of the first video, and display the first text information; the content of the first text information and the first voice information is determined according to the content of the first communication message, and the first interpolated image sequence includes at least one video frame of the first video.
  • since the playback window of the first video not only plays the first interpolated image sequence and the first voice information but also displays the first text information, the user can learn the content of the first communication message in time without switching to the corresponding application program.
  • the device 300 further includes an inserting module configured to insert, before the first prompt message is output in the play window of the first video, the first interpolated image sequence after a first video frame of the first video, where the first video frame is the video frame displayed in the play window when the first communication message is received.
  • when the first communication message is received, the first interpolated image sequence is inserted after the video frame displayed in the play window, and the first video information is generated based on the first interpolated image sequence, so that in the first video, through intelligent frame interpolation, an object in the first video can read out the first communication message through its lines; without switching to the corresponding application program, the user can still learn the content of the first communication message in time.
  • In some embodiments, the device 300 further includes an alignment module, which is configured to align the timestamps of the first text information, the first voice information, and the first interpolated image sequence.
  • The first video information is generated from the first interpolated image sequence, the first text information, and the first voice information. Since the first text information and the first voice information correspond to the first communication message, the user can view the first communication message efficiently and intuitively, and the fun of information interaction is also increased.
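The alignment module's role can be illustrated with a hedged sketch (function and parameter names are assumptions): pad the interpolated sequence so it lasts at least as long as the synthesized speech, and clamp subtitle cue times so all three tracks share one timeline:

```python
def align_timestamps(text_cues, voice_duration_ms, frame_count,
                     frame_interval_ms=40):
    """Return (padded_frame_count, clamped_cues) on one shared timeline.

    text_cues: list of (start_ms, end_ms, text) subtitle entries.
    """
    frames_duration = frame_count * frame_interval_ms
    target = max(frames_duration, voice_duration_ms)  # common track length
    padded_frame_count = -(-target // frame_interval_ms)  # ceil division
    clamped_cues = [(min(s, target), min(e, target), t)
                    for s, e, t in text_cues]
    return padded_frame_count, clamped_cues
```

With the tracks aligned this way, the synthesized speech never outlives the video, and no subtitle cue points past the end of the inserted sequence.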
  • the apparatus 300 further includes a generating module and a sending module.
  • the receiving module 310 is further configured to receive a first input from the user.
  • the generating module is used to generate second video information.
  • the sending module is used to send the second video information to the communication object of the first communication message.
  • This application supports generating second video information in the first video and sending the second video information to the communication object of the first communication message, so that the communication object can directly view the reply through the second video information.
  • the receiving module 310 is specifically configured to: receive a user's first input on the playing window.
  • the generating module is specifically configured to: display a message reply window in response to the first input; receive a second input from the user on the message reply window; determine second text information and second voice information in response to the second input; and synthesize the second text information, the second voice information, and a second interpolated image sequence to obtain the second video information, where the second interpolated image sequence includes at least one video frame of the first video.
  • This application provides a customizable message reply method that replies to the communication object of the first communication message with an interpolated video. The communication object can directly view the user's reply through the second video information, which increases the fun of message interaction and the immersive experience of video viewers, since the user does not need to switch out of the video to interact with messages.
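The reply path just described can be sketched as follows. This is a hedged illustration only: `synthesize_speech` and `mux` are injected stand-ins for a text-to-speech engine and a container muxer, not APIs named in the disclosure.

```python
def build_reply_video(reply_text, interpolated_frames, synthesize_speech, mux):
    """Turn the user's typed reply into the second text information and,
    via TTS, the second voice information; then mux both with the reused
    frame sequence to obtain the second video information."""
    voice = synthesize_speech(reply_text)       # second voice information
    subtitles = [(0, len(voice), reply_text)]   # single cue spanning the audio
    return mux(frames=interpolated_frames, audio=voice, subtitles=subtitles)
```

The injected dependencies keep the sketch independent of any particular TTS or muxing library; a real implementation would pass concrete engines in their place.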
  • the information processing device in this embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal.
  • the device may be a mobile electronic device or a non-mobile electronic device.
  • the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a handheld computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA).
  • The non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like, which is not specifically limited in the embodiments of this application.
  • the information processing device in the embodiment of the present application may be a device with an operating system.
  • the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in this embodiment of the present application.
  • the information processing device provided in the embodiment of the present application can implement the various processes implemented in the foregoing method embodiments, and details are not repeated here to avoid repetition.
  • The embodiment of the present application further provides an electronic device 400, including a processor 401, a memory 402, and a program or instruction stored in the memory 402 and executable on the processor 401. When the program or instruction is executed by the processor 401, the processes of the above information processing method embodiment are implemented, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
  • the electronic devices in the embodiments of the present application include the above-mentioned mobile electronic devices and non-mobile electronic devices.
  • FIG. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
  • the electronic device 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and other components.
  • the electronic device 500 may also include a power supply (such as a battery) for supplying power to the various components; the power supply may be logically connected to the processor 510 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • The structure of the electronic device shown in FIG. 5 does not constitute a limitation on the electronic device; the electronic device may include more or fewer components than shown in the figure, combine certain components, or use a different arrangement of components, and details are not repeated here.
  • the processor 510 is configured to receive a first communication message when the first video is played; the display unit 506 is configured to output a first prompt message in the play window of the first video, where the first prompt message includes first video information, and the content of the first video information is determined according to the content of the first communication message.
  • the display unit 506 is configured to play the first voice information and the first interpolated image sequence, and display the first text information; the content of the first text information and of the first voice information is determined according to the content of the first communication message, and the first interpolated image sequence includes at least one video frame of the first video.
  • When the first communication message is received while the first video is being watched, the first prompt message is output in the play window of the first video. Because the first prompt message includes the first video information, and the content of the first video information is determined according to the content of the first communication message, a user who receives the first communication message while watching the first video can directly view it through the first video information without switching to the application program corresponding to the communication message, and can learn the content of the first communication message without any operation, which improves the user's viewing experience.
  • the processor 510 is configured to insert the first interpolated image sequence after the first video frame of the first video, where the first video frame is the video frame displayed in the play window when the first communication message is received.
  • In the play window of the first video, the first interpolated image sequence and the first voice information are played, and the first text information is also displayed, so the user can learn the content of the first communication message in a timely manner without switching to the corresponding application program.
  • When the first communication message is received, the processor 510 inserts the first interpolated image sequence after the video frame displayed in the play window, and generates the first video information based on that sequence. In this way, through intelligent video frame insertion, the object in the first video appears to read out the first communication message as lines of dialogue, and the user can learn the content of the first communication message in time without switching to the corresponding application program.
  • the processor 510 is further configured to align the timestamps of the first text information, the first voice information, and the first interpolated image sequence.
  • The first video information is generated from the first interpolated image sequence, the first text information, and the first voice information. Since the first text information and the first voice information correspond to the first communication message, the user can view the first communication message efficiently and intuitively.
  • the user input unit 507 is configured to receive a user's first input; the processor 510 is configured to generate second video information in response to the first input; and send the second video information to the The communication object of the first communication message.
  • This application supports generating second video information in the first video and sending the second video information to the communication object of the first communication message, so that the communication object can directly view the reply through the second video information.
  • the user input unit 507 is configured to receive a user's first input on the play window; the display unit 506 is configured to display a message reply window in response to the first input; the user input unit 507 is further configured to receive a second input from the user on the message reply window; the processor 510 is configured to determine second text information and second voice information in response to the second input, and to synthesize the second text information, the second voice information, and a second interpolated image sequence to obtain the second video information, where the second interpolated image sequence includes at least one video frame of the first video.
  • the input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042; the graphics processor 5041 processes image data of still pictures or video obtained by an image capture device (such as a camera).
  • the display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like.
  • the user input unit 507 includes a touch panel 5071 and other input devices 5072 .
  • the touch panel 5071 is also called a touch screen.
  • the touch panel 5071 may include two parts, a touch detection device and a touch controller.
  • Other input devices 5072 may include, but are not limited to, physical keyboards, function keys (such as volume control keys, switch keys, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • Memory 509 may be used to store software programs as well as various data, including but not limited to application programs and operating systems.
  • Processor 510 may integrate an application processor and a modem processor, wherein the application processor mainly processes operating systems, user interfaces, and application programs, and the modem processor mainly processes wireless communications. It can be understood that the foregoing modem processor may not be integrated into the processor 510 .
  • The embodiment of the present application also provides a readable storage medium storing a program or instruction. When the program or instruction is executed by the processor, the processes of the above information processing method embodiments are implemented, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
  • the processor is the processor in the electronic device described in the above embodiments.
  • The readable storage medium includes a computer-readable storage medium, such as a computer read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • The embodiment of the present application further provides a chip, including a processor and a communication interface coupled to the processor; the processor is configured to run a program or instruction to implement the processes of the above information processing method embodiment, and the same technical effect can be achieved. To avoid repetition, details are not repeated here.
  • The chip mentioned in the embodiments of the present application may also be called a system-level chip, a system chip, a chip system, or a system-on-chip.
  • The terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus comprising a set of elements includes not only those elements but also other elements not expressly listed, or elements inherent to the process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or apparatus comprising that element.
  • The scope of the methods and devices in the embodiments of the present application is not limited to performing functions in the order shown or discussed; functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may also be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application relates to the technical field of information processing, and discloses an information processing method and apparatus, an electronic device, and a medium. The method comprises: receiving a first communication message while a first video is being played; and outputting a first prompt message in a play window of the first video, the first prompt message comprising first video information, the content of the first video information being determined according to the content of the first communication message.
PCT/CN2022/119576 2021-09-22 2022-09-19 Procédé et appareil de traitement d'informations, et dispositif électronique et support WO2023045856A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111118055.9 2021-09-22
CN202111118055.9A CN115103054B (zh) 2021-09-22 2021-09-22 信息处理方法、装置、电子设备及介质

Publications (1)

Publication Number Publication Date
WO2023045856A1 true WO2023045856A1 (fr) 2023-03-30

Family

ID=83287774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/119576 WO2023045856A1 (fr) 2021-09-22 2022-09-19 Procédé et appareil de traitement d'informations, et dispositif électronique et support

Country Status (2)

Country Link
CN (1) CN115103054B (fr)
WO (1) WO2023045856A1 (fr)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075337A (zh) * 2009-11-20 2011-05-25 腾讯科技(深圳)有限公司 一种即时通信消息显示方法和相关装置
CN106550276A (zh) * 2015-09-22 2017-03-29 阿里巴巴集团控股有限公司 视频播放过程中多媒体信息的提供方法、装置和系统
CN107241655A (zh) * 2017-06-30 2017-10-10 广东欧珀移动通信有限公司 一种视频播放方法、装置、存储介质和终端
CN108733291A (zh) * 2018-04-12 2018-11-02 珠海格力电器股份有限公司 一种通知消息的处理方法及装置
CN109460174A (zh) * 2018-11-09 2019-03-12 维沃移动通信有限公司 一种信息处理方法及终端设备
WO2019174477A1 (fr) * 2018-03-12 2019-09-19 Oppo广东移动通信有限公司 Procédé et dispositif d'affichage d'interface utilisateur, et terminal
CN110830650A (zh) * 2019-10-30 2020-02-21 深圳传音控股股份有限公司 一种终端的提醒方法、终端及计算机存储介质
CN112637409A (zh) * 2020-12-21 2021-04-09 维沃移动通信有限公司 内容输出方法、装置和电子设备

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110944224B (zh) * 2019-11-29 2021-11-30 维沃移动通信有限公司 视频播放方法及电子设备
CN111416997B (zh) * 2020-03-31 2022-11-08 百度在线网络技术(北京)有限公司 视频播放方法、装置、电子设备和存储介质
CN112565868B (zh) * 2020-12-04 2022-12-06 维沃移动通信有限公司 视频播放方法、装置及电子设备
CN112866782A (zh) * 2020-12-30 2021-05-28 北京五八信息技术有限公司 视频播放方法、视频播放装置及电子设备

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075337A (zh) * 2009-11-20 2011-05-25 腾讯科技(深圳)有限公司 一种即时通信消息显示方法和相关装置
CN106550276A (zh) * 2015-09-22 2017-03-29 阿里巴巴集团控股有限公司 视频播放过程中多媒体信息的提供方法、装置和系统
CN107241655A (zh) * 2017-06-30 2017-10-10 广东欧珀移动通信有限公司 一种视频播放方法、装置、存储介质和终端
WO2019174477A1 (fr) * 2018-03-12 2019-09-19 Oppo广东移动通信有限公司 Procédé et dispositif d'affichage d'interface utilisateur, et terminal
CN108733291A (zh) * 2018-04-12 2018-11-02 珠海格力电器股份有限公司 一种通知消息的处理方法及装置
CN109460174A (zh) * 2018-11-09 2019-03-12 维沃移动通信有限公司 一种信息处理方法及终端设备
CN110830650A (zh) * 2019-10-30 2020-02-21 深圳传音控股股份有限公司 一种终端的提醒方法、终端及计算机存储介质
CN112637409A (zh) * 2020-12-21 2021-04-09 维沃移动通信有限公司 内容输出方法、装置和电子设备

Also Published As

Publication number Publication date
CN115103054B (zh) 2023-10-13
CN115103054A (zh) 2022-09-23

Similar Documents

Publication Publication Date Title
WO2020253760A1 (fr) Procédé d'entrée, dispositif électronique, et système de projection sur écran
US11405678B2 (en) Live streaming interactive method, apparatus, electronic device, server and storage medium
EP3989047A1 (fr) Procédé de commande vocale d'un appareil, et appareil électronique
CN114286173B (zh) 一种显示设备及音画参数调节方法
CN111541930B (zh) 直播画面的显示方法、装置、终端及存储介质
US11140315B2 (en) Method, storage medium, terminal device, and server for managing push information
WO2020211437A1 (fr) Procédé de diffusion sur écran, dispositif d'interaction sur plusieurs écrans et système
CN109729420A (zh) 图片处理方法及装置、移动终端及计算机可读存储介质
WO2017080145A1 (fr) Procédé et terminal de traitement d'informations, et support de stockage informatique
CN105609096A (zh) 文本数据输出方法和装置
US10685642B2 (en) Information processing method
WO2023125677A1 (fr) Circuit, procédé et appareil d'interpolation de trame graphique discrète, puce, dispositif électronique et support
WO2023030519A1 (fr) Procédé de traitement de projection d'écran et dispositif associé
CN112954046A (zh) 信息发送方法、信息发送装置和电子设备
US20220191556A1 (en) Method for processing live broadcast information, electronic device and storage medium
WO2022068721A1 (fr) Procédé et appareil de capture d'écran et dispositif électronique
US11818498B2 (en) Screen recording method and apparatus, and electronic device
CN110768961A (zh) 用于计算和娱乐装置的移动播放接收器
WO2023125316A1 (fr) Procédé et appareil de traitement vidéo, dispositif électronique et support
WO2023125553A1 (fr) Procédé et appareil d'interpolation de trame et dispositif électronique
WO2023045856A1 (fr) Procédé et appareil de traitement d'informations, et dispositif électronique et support
WO2023169361A1 (fr) Procédé et appareil de recommandation d'informations et dispositif électronique
WO2023066100A1 (fr) Procédé et appareil de partage de fichiers
WO2023109831A1 (fr) Procédé et appareil de traitement de messages et dispositif électronique
WO2023011300A1 (fr) Procédé et appareil pour enregistrer l'expression faciale d'un observateur vidéo

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871902

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE