CN112261317A - Video generation method and device, electronic equipment and computer readable storage medium - Google Patents

Video generation method and device, electronic equipment and computer readable storage medium Download PDF

Info

Publication number
CN112261317A
CN112261317A (application CN202011147009.7A; granted as CN112261317B)
Authority
CN
China
Prior art keywords: video, content information, terminal, composite, receiving
Prior art date
Legal status
Granted
Application number
CN202011147009.7A
Other languages
Chinese (zh)
Other versions
CN112261317B (en)
Inventor
张伟 (Zhang Wei)
宫昀 (Gong Yun)
张聪 (Zhang Cong)
熊征宇 (Xiong Zhengyu)
易琳凯 (Yi Linkai)
Current Assignee
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202011147009.7A
Publication of CN112261317A
Application granted
Publication of CN112261317B
Legal status: Active
Anticipated expiration

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the present disclosure disclose a video generation method and apparatus, an electronic device, and a computer-readable storage medium. The video generation method comprises the following steps: receiving and playing a first video; displaying candidate content information; acquiring selected content information from the candidate content information in response to receiving a content selection signal; and generating a composite video according to the selected content information and the first video, wherein the composite video comprises a display effect indicating whether the selected content information corresponds to the first video. By allowing users to operate on the video itself, the method addresses the technical problem of insufficient diversity in video interaction.

Description

Video generation method and device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of video processing, and in particular, to a video generation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the rapid development of information technology, mobile internet technology has also advanced dramatically. The emergence of smart devices, the arrival of the 5G era, and the application of technologies such as big data, AI, and algorithms have given electronic mobile devices new capabilities. In daily life, these technologies have diversified social communication. The smartphone in particular breaks through the space-time limits of everyday human communication: it is a truly comprehensive handheld mobile device that integrates massive information, online audio-visual content, leisure, and entertainment, and it satisfies people's daily needs for information and social interaction.
In terms of interaction, the form of information dissemination has gradually evolved from text, emoticons, pictures, and voice to video. With the boom of the live-streaming market, mobile social short video has risen rapidly and become a new carrier through which people interact and spread information. By creating and imitating videos themselves, or by liking, commenting on, sharing, and forwarding short videos published by other users, users achieve interactive communication and satisfy their needs.
However, existing interaction modes offer only the functions above and provide no convenient way for users to interact through the video itself; that is, a user can perform the above interaction operations on a video but cannot operate on the video itself to complete an interaction.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In order to solve the above technical problem, the embodiments of the present disclosure propose the following technical solutions.
In a first aspect, an embodiment of the present disclosure provides a video generation method, including:
receiving and playing a first video;
displaying the candidate content information;
acquiring selected content information from the candidate content information in response to receiving a content selection signal;
generating a composite video according to the selected content information and the first video; and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video.
In a second aspect, an embodiment of the present disclosure provides a video generating apparatus, including:
the video receiving module is used for receiving and playing a first video;
the content display module is used for displaying the candidate content information;
a content information acquisition module for acquiring content information selected from the candidate content information in response to receiving a content selection signal;
the synthesis module is used for generating a synthesized video according to the selected content information and the first video; and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the preceding first aspects.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform the method of any one of the foregoing first aspects.
The embodiments of the present disclosure disclose a video generation method and apparatus, an electronic device, and a computer-readable storage medium. The video generation method comprises the following steps: receiving and playing a first video; displaying candidate content information; acquiring selected content information from the candidate content information in response to receiving a content selection signal; and generating a composite video according to the selected content information and the first video, wherein the composite video comprises a display effect indicating whether the selected content information corresponds to the first video. By allowing users to operate on the video itself, the method addresses the technical problem of insufficient diversity in video interaction.
The foregoing is a summary of the present disclosure, provided to promote a clear understanding of its technical means. The disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a video generation method provided in an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a specific implementation of generating a processing result in a video generation method provided in an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a video generation method provided by an embodiment of the present disclosure;
fig. 4 is a schematic flowchart illustrating an embodiment of a video generation method according to another embodiment;
fig. 5 is a schematic view of an application scenario of a video generation method provided by the embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an embodiment of a video generating apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an embodiment of a video generating apparatus according to another embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of an embodiment of a video generation method provided in an embodiment of the present disclosure, where the video generation method provided in this embodiment may be executed by a video generation apparatus, and the video generation apparatus may be implemented as software, or implemented as a combination of software and hardware, and the video generation apparatus may be integrated in some device in a video generation system, such as a video generation server or a video generation terminal device. As shown in fig. 1, the method comprises the steps of:
step S101, receiving and playing a first video;
in this step, a first video is received and played in a first terminal, where the first terminal is a terminal used by a first user, and illustratively, the first terminal includes a first client, and the first client is used for interaction between users.
Optionally, the first video is a video generated in another terminal and sent by another user to the user of the first terminal for interaction.
Optionally, the first video corresponds to the first content and is a video shot by the user of the other terminal according to the first content.
Optionally, the step S101 includes:
step S201, responding to the received first notification message, displaying a first prompt message; wherein the first notification message at least comprises a first video identifier and a content identifier;
step S202, responding to the received selection signal of the first prompt message, receiving and playing the first video from the server according to the identification of the first video.
In this optional embodiment, the first video is a video generated by shooting in the second terminal. After the first video is generated, the user of the second terminal selects the user of the first terminal to whom the video is to be sent, and a first notification message is generated to notify the user of the first terminal, so that the user of the first terminal can choose whether to watch the first video. Optionally, the second terminal sends the generated first video to a server, generates the first notification message, and sends the first notification message to the first terminal. The first notification message comprises at least an identifier of the first video and a content identifier, wherein the identifier of the first video uniquely identifies the first video in the server, and the content identifier identifies the content of the first video.
In the optional embodiment, when the first terminal receives the first notification message, a first prompt message is displayed on the first terminal, and the first prompt message is used for prompting a user to watch the first video. Illustratively, the first prompting message includes a prompting display interface, and the prompting display interface includes two selection buttons, wherein the first selection button indicates that the user watches the video, and the second selection button indicates that the user does not watch the video.
And receiving and playing the first video in response to receiving the selection signal of the first prompt message. Illustratively, when the user selects the first selection button, the first terminal acquires the identifier of the first video from the first notification message, receives the first video from the server through the identifier of the first video, and plays the first video.
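The notification-and-fetch flow above (steps S201–S202) can be sketched as follows. This is an illustrative Python sketch, not part of the patent: the message fields and the in-memory server stand-in are assumptions based only on the statement that the first notification message carries at least a first-video identifier and a content identifier.

```python
from dataclasses import dataclass

# Hypothetical message shape: the patent only specifies that the first
# notification message carries a first-video identifier and a content identifier.
@dataclass
class FirstNotificationMessage:
    first_video_id: str  # uniquely identifies the first video on the server
    content_id: str      # identifies the content of the first video

# In-memory stand-in for the server's video storage (illustrative only).
SERVER_VIDEOS = {"v-001": b"<first video bytes>"}

def handle_prompt_selection(msg: FirstNotificationMessage, watch: bool):
    """Step S202: if the user selects 'watch', fetch the first video from the
    server by its identifier; otherwise do nothing."""
    if not watch:
        return None
    return SERVER_VIDEOS[msg.first_video_id]
```

When the user presses the first selection button (`watch=True`), the video is retrieved by its identifier; the second button simply dismisses the prompt.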
Optionally, the first video is obtained by the second terminal displaying a shooting interface for shooting after the second terminal displays the content information selected by the second user from candidate content information shown on the interface of the second terminal. Shooting through the shooting interface displayed on the interface of the second terminal comprises:
step S301, shooting a first video, wherein the first video corresponds to first content;
step S302, generating a first notification message according to the first video and the first content;
step S303, sending the first notification message to the first terminal.
Optionally, the steps in this embodiment are performed by a second terminal different from the first terminal, in step S301, a first shooting interface of the first video is displayed, and in response to receiving a shooting instruction, the second terminal shoots the first video through the image sensing device, where the first video corresponds to the first content information; optionally, the first content information includes a subject of the first video or an event in the first video, and the like, and exemplarily, the first content information is playing football, basketball, and the like.
Optionally, in step S302, a first video identifier is generated for the first video; and if the first content information is the self-defined content, generating a first content identifier for the first content information, and if the first content is the preset content, acquiring the first content identifier of the first content information. And taking the first video identification and the first content identification as parameters of the first notification message to generate the first notification message.
In step S303, the first notification message is sent to the first terminal according to the first terminal or the user of the first terminal selected by the user of the second terminal. Optionally, before or after sending the first notification message, the second terminal further sends the first video and the first content information to the server, so that the first terminal can obtain the first video and the first content information from the server. Wherein the identifier of the first video and the identifier of the first content information are stored together in an associated manner to indicate that the content of the first video is the first content.
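Steps S302–S303 above (generating identifiers and the notification message, and storing the video together with its content association on the server) can be sketched as below. This is a hypothetical Python sketch: the store names, the identifier format, and `publish_first_video` are illustrative assumptions, not from the patent.

```python
import uuid

# In-memory stand-ins for server-side storage (illustrative only).
video_store = {}    # video id -> video payload
content_index = {}  # video id -> content id, stored in association (step S303)

# Preset content information already has identifiers; custom content gets a new one.
PRESET_CONTENT_IDS = {"playing football": "c-football"}

def publish_first_video(video_bytes, content_info):
    """Steps S302-S303: mint a first-video identifier, resolve or mint the
    content identifier, store both on the 'server', and build the notification."""
    video_id = "v-" + uuid.uuid4().hex[:8]
    # Preset content reuses its preset identifier; custom content is assigned one.
    content_id = PRESET_CONTENT_IDS.get(content_info, "c-" + uuid.uuid4().hex[:8])
    video_store[video_id] = video_bytes
    content_index[video_id] = content_id  # associates the first video with the first content
    # The first notification message carries both identifiers as parameters.
    return {"first_video_id": video_id, "content_id": content_id}
```

The stored association is what later lets the receiving terminal check whether a guessed content identifier matches the first video's content.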
Before step S301, the method further comprises: the second user selecting content information from the candidate content information displayed on the interface of the second terminal. This comprises the following steps:
step S401, displaying candidate content information;
and step S402, responding to the received selection signal of the candidate content information, and displaying a first shooting interface.
In step S401, candidate content information is displayed in the second terminal, and optionally, the candidate content information is a plurality of content information under the same theme, for example, the theme is sports, and the candidate content information includes: playing football, basketball, badminton and table tennis. The candidate content information is provided for a user of the second terminal to select so as to shoot a first video of the corresponding content.
In step S402, the user of the second terminal selects one of the candidate content information as the content to be shot, and the second terminal receives the selection signal of the candidate content information. It can be understood that the selection signal is generated through a human-computer interaction interface, for example by tapping a touch screen. When the selection signal is received, the first shooting interface is displayed in the second terminal. Optionally, the first shooting interface includes at least one function button, such as a shooting button and an effect button, where the shooting button controls the start and end of shooting and the effect button is used to add various special effects to the first video.
Optionally, after the second terminal receives and plays a second video sent by a third terminal, the first video is obtained by displaying a shooting interface on the display interface of the second terminal for shooting. Specifically, before step S301, the method further comprises:
step S501, receiving and playing a second video;
and step S502, responding to the completion of the playing of the second video, and displaying a first shooting interface.
In step S501, the second terminal receives the second video from the third terminal, where a process of receiving the second video by the second terminal is the same as a process of receiving the first video by the first terminal, and details can refer to the description of receiving the first video by the first terminal, which is not repeated herein.
In step S502, when playback of the second video finishes, a first shooting interface is displayed. Playback of the second video is finished when the second video plays to the end or when its playback is interrupted by a user operation on the second terminal. The first shooting interface is the same as the one in step S402; for details, refer to the description of step S402, which is not repeated here.
Steps S401-S402 above describe the second terminal generating the first video as the originating terminal of the video chain. Steps S501-S502 above describe the second terminal generating the first video as an intermediate terminal of the video chain.
Step S102, displaying candidate content information;
optionally, after receiving and playing the first video, displaying candidate content information. Illustratively, after the first video is played, displaying the candidate content information on the first terminal; or displaying the candidate content information on the first terminal according to the operation of the user, for example, displaying the candidate content information on the first terminal after the user closes the first video. Wherein each of the candidate content information indicates a content related to the video, wherein the candidate content information is the same as or partially the same as the candidate content information displayed in the second terminal in the above step S401. For example, in step S401, the candidate content information is football, basketball, badminton, and table tennis, and in step S102, the candidate content information is the same as in step S401.
Optionally, the step S102 includes:
acquiring and displaying the candidate content information from the server according to the content identifier.
In this embodiment, the content identifier is an identifier of the theme formed by the candidate content information, such as a ball-games identifier covering playing football, basketball, badminton, and table tennis. Optionally, the content identifier is a preset identifier, and the content information it covers is also preset. In this step, after the first terminal obtains the content identifier, the candidate content information corresponding to the content identifier is acquired from the server and displayed in the first terminal for the user of the first terminal to select.
Or the content identifier is an identifier of the candidate content information, that is, each content information in the candidate content information has an independent identifier, the content identifier is a set of identifiers of content information in the candidate content information, and at this time, the candidate content information is obtained from the server according to the identifier of the candidate content information and is displayed in the first terminal.
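The two alternatives above — a preset theme identifier versus a set of per-item identifiers — can be sketched as a simple lookup. All identifiers and names here are illustrative assumptions, not values from the patent.

```python
# Alternative 1: a preset theme identifier maps to a fixed option list.
THEME_CONTENT = {
    "theme-ball-games": ["playing football", "playing basketball",
                         "playing badminton", "playing table tennis"],
}

# Alternative 2: each content item has its own identifier; the "content
# identifier" is then the set (here, a list) of per-item identifiers.
CONTENT_BY_ID = {
    "c-football": "playing football",
    "c-basketball": "playing basketball",
}

def fetch_candidates(content_identifier):
    """Resolve a content identifier to the candidate content information."""
    if isinstance(content_identifier, str):  # preset theme identifier
        return THEME_CONTENT[content_identifier]
    # set of per-item identifiers
    return [CONTENT_BY_ID[c] for c in content_identifier]
```

Either way, the first terminal ends up with the same option list the second terminal showed, so the guessing game is well defined.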
Step S103, responding to the received content selection signal, and acquiring the selected content information from the candidate content information;
optionally, the content selection signal is generated by a user through a human-computer interface, and if the user selects one of the candidate content information, the first terminal receives the content selection signal.
Optionally, content information in the candidate content information is displayed in the first terminal, the candidate content information has a corresponding display position, and the content selection signal has a selection position, so that the content information selected from the candidate content information can be acquired according to a correspondence between the selection position and the display position.
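The correspondence between a selection position and a display position can be sketched as a hit test. The rectangle representation below is an assumption; the patent only states that the selected content is found from the correspondence between the selection position and the display positions.

```python
def select_content(candidates, display_rects, tap_xy):
    """Return the candidate whose display rectangle contains the tap position.

    candidates:    list of content information strings
    display_rects: list of (x, y, w, h) rectangles aligned with candidates
    tap_xy:        (x, y) selection position from the content selection signal
    """
    tx, ty = tap_xy
    for info, (x, y, w, h) in zip(candidates, display_rects):
        if x <= tx < x + w and y <= ty < y + h:
            return info
    return None  # tap landed outside every displayed option
```

A tap inside an option's rectangle yields that option's content information; a tap elsewhere selects nothing.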
Step S104, generating a composite video according to the selected content information and the first video; and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video.
Optionally, the selected content information has a content identifier, and whether the selected content information corresponds to the first video is determined by determining whether the content identifier of the selected content information is the same as the identifier of the first content of the first video.
Optionally, the composite video includes a display effect indicating whether the selected content information corresponds to the first video. Optionally, the display effect includes a first display effect or a second display effect. Illustratively, the first presentation effect indicates that the selected content information corresponds to the first video, and the second presentation effect indicates that it does not. Optionally, the first display effect or the second display effect is a preset video or a preset special effect, and when the composite video is generated, the first display effect or the second display effect is added during compositing.
Optionally, the step S104 includes:
responding to the selected content information corresponding to the first video, and generating the composite video according to the first video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises the second display effect.
As described above, optionally, it is determined whether the selected content information corresponds to the first video according to the identifier of the selected content information and the identifier of the first content of the first video. And if the selected content information is the same as the identifier of the first content, the selected content information corresponds to the first video, otherwise, the selected content information does not correspond to the first video.
The above steps correspond to the situation that only the first video exists, at this time, the first video is generated by the second terminal and is received and played by the first terminal, and the user of the first terminal triggers the first terminal to generate the selection signal of the candidate content information according to the content of the watched first video. And the first terminal automatically generates a composite video by the first video and the first display effect or the second display effect according to the identifier of the selected content information in a preset mode. Illustratively, the first presentation effect is a congratulatory video, which indicates that the selected content information corresponds to the first video; the second presentation effect is an encouraging video, indicating that the selected content information does not correspond to the first video. In this alternative embodiment, the first presentation effect or the second presentation effect is composited with the first video to generate the composite video.
Optionally, the step S104 includes:
responding to the selected content information corresponding to the first video, and generating the composite video according to the first video and the second video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises the second display effect.
The steps correspond to the situation at least comprising a first video and a second video, wherein the first video is generated by a second terminal, the second video is generated by a third terminal, the third terminal is a first terminal for generating the videos, the third terminal generates the second video and sends the second video to the second terminal, a user of the second terminal shoots and generates the first video after watching the second video and sends the first video to the first terminal, the first video is received and played by the first terminal, and the user of the first terminal triggers the first terminal to generate a selection signal of candidate content information according to the content of the watched first video. And the first terminal automatically generates a composite video by the first video and the first display effect or the second display effect according to the identifier of the selected content information in a preset mode. Illustratively, the first presentation effect is a congratulatory video, which indicates that the selected content information corresponds to the first video; the second presentation effect is an encouraging video, indicating that the selected content information does not correspond to the first video. In this alternative embodiment, the first presentation effect or the second presentation effect is composited with the first video and the second video to generate the composite video.
Optionally, the generating the composite video according to the first video and the second video includes:
scaling the first video and the second video to generate the composite video;
and the zoomed first video and the zoomed second video are displayed in a playing interface of the composite video in parallel.
Optionally, the first notification message further includes a second video identifier, and the second video is obtained through the following steps:
and acquiring the second video from a server according to the second video identifier.
In this optional embodiment, the first notification message further includes a second video identifier in addition to the first video identifier, and when the video is played, only the first video is obtained through the server, and when the video needs to be synthesized, the second video also needs to be obtained from the server through the second video identifier.
In this alternative embodiment, in order to reduce the length of the composite video, the first video and the second video are scaled, and the scaled videos are placed in the same interface of the composite video; that is, the first video and the second video become two video channels of the composite video. The duration of the composite video thus equals that of a single video rather than the sum of the two durations, which effectively reduces the length of the composite video.
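The scaling step can be sketched as a layout computation that fits both videos side by side on one canvas, so the composite plays both channels in parallel. Splitting the canvas into half-widths is an assumption for illustration; the patent does not specify the scaling ratio.

```python
def side_by_side_layout(w1, h1, w2, h2, canvas_w, canvas_h):
    """Scale two videos to share one canvas side by side, preserving aspect ratio.

    Each video is allotted half the canvas width and the full canvas height.
    Returns the scaled (width, height) of each video.
    """
    half_w = canvas_w // 2

    def fit(w, h):
        # Largest uniform scale that keeps the frame inside its half-canvas slot.
        scale = min(half_w / w, canvas_h / h)
        return int(w * scale), int(h * scale)

    return fit(w1, h1), fit(w2, h2)
```

For example, two portrait 1080x1920 clips on a 1080x960 canvas each scale to 540x960 and sit next to each other, so the composite is one clip long instead of two.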
Fig. 6 is a schematic view of an application scenario according to an embodiment of the present disclosure. As shown in fig. 6, the application scenario includes a plurality of terminal devices 601-603 and a server 604, where the terminal devices and the server exchange data through wired or wireless links. Fig. 6 shows a video interaction process: a user of terminal device 601 selects video content and shoots a second video, then passes it to the next terminal 602; after watching the second video, the user of terminal 602 shoots a first video imitating it and passes the first video to the next terminal 603; after watching the first video, the user of terminal 603 guesses the content of the video from the given options, whereupon the first video and the second video are scaled, composited, and played on terminal 603, with a display effect in the composite video indicating whether the user of terminal 603 selected the correct content. As shown in fig. 6, interface 60101 is the interface on which terminal 601 shoots the second video, interface 60201 is the interface on which terminal 602 shoots the first video, and interface 60301 is the interface on which terminal 603 plays the composite of the first video and the second video. A complete pass through the application scenario proceeds as follows. The user of terminal 601 initiates a video interaction and selects playing football as the content of the video to be shot; terminal 601 then enters a shooting interface, where the user shoots and sets special effects; after shooting finishes, a second video ID is generated and stored together with the content ID for playing football. The user of terminal 601 sends the second video to terminal 602 through a private message and uploads it to the server 604 for storage.
After receiving the private message, terminal 602 displays a prompt interface asking whether its user wishes to participate in the video interaction. If the user of terminal 602 participates, terminal 602 acquires the second video from the server 604 through the second video ID and plays it, and the user of terminal 602 then enters a shooting interface to shoot a first video. After the first video is generated, a private message is sent to the user of terminal 603; the ID of the first video, the ID of the second video, and the ID of the content (kicking a football) are stored on the server, and the first video is stored on the server as well. After receiving the private message, terminal 603 likewise displays a prompt interface asking whether its user wishes to participate in the video interaction. If the user of terminal 603 participates, a content information selection interface is displayed, and the user of terminal 603 guesses, from the first video, the content the first video is meant to express. The video generation process then starts: terminal 603 acquires the first video and the second video from the server according to their IDs, judges through the content ID whether the user of terminal 603 guessed correctly, obtains the corresponding display effect, scales and composites the first video and the second video, composites the display effect into them to generate a composite video, and plays the composite video on terminal 603. The user of terminal 603 can tell from the display effect in the composite video whether the content of the first video and the second video was guessed correctly.
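The round trip above can be sketched with an in-memory stand-in for server 604; all class, field, and ID names here are assumptions for illustration, not part of the disclosure:

```python
class InMemoryServer:
    """Toy stand-in for server 604: stores videos and per-round metadata."""
    def __init__(self):
        self.videos = {}   # video ID -> video payload (a string here)
        self.rounds = {}   # first video ID -> (second video ID, content ID)

    def upload(self, video_id, payload):
        self.videos[video_id] = payload

    def register_round(self, first_id, second_id, content_id):
        self.rounds[first_id] = (second_id, content_id)

def generate_composite_on_terminal_603(server, first_id, selected_content_id):
    """Fetch both videos by ID, judge the guess via the content ID,
    and pick the display effect to composite into the video."""
    second_id, content_id = server.rounds[first_id]
    first = server.videos[first_id]
    second = server.videos[second_id]
    # The first display effect marks a correct guess, the second a wrong one.
    effect = ("first_display_effect"
              if selected_content_id == content_id
              else "second_display_effect")
    # A real implementation would now scale `first` and `second` and
    # overlay `effect` to produce the composite video.
    return {"first": first, "second": second, "effect": effect}
```

The dictionary returned here only models which inputs the composition step receives; the actual scaling and effect overlay are described in the embodiments above.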
The embodiment of the disclosure discloses a video generation method, which comprises the following steps: receiving and playing a first video; displaying candidate content information; in response to receiving a content selection signal, acquiring selected content information from the candidate content information; and generating a composite video according to the selected content information and the first video, wherein the composite video comprises a display effect indicating whether the selected content information corresponds to the first video. By performing such operations on videos, the method enriches the forms of video interaction and thus solves the technical problem of insufficient diversity of video interaction.
Although the steps in the above method embodiments are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in that sequence and may also be performed in other sequences, such as reverse, parallel, or interleaved order. Moreover, on the basis of the above steps, those skilled in the art may also add other steps. These obvious modifications or equivalents should also fall within the protection scope of the present disclosure and are not described herein again.
Fig. 7 is a schematic structural diagram of an embodiment of a video generating apparatus provided in a first terminal according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus 700 includes: a video receiving module 701, a content display module 702, a content information obtaining module 703, and a synthesizing module 704. Wherein:
a video receiving module 701, configured to receive and play a first video;
a content display module 702, configured to display candidate content information;
a content information obtaining module 703, configured to, in response to receiving a content selection signal, obtain content information selected from the candidate content information;
a synthesizing module 704, configured to generate a synthesized video according to the selected content information and the first video;
and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video. Further, the video receiving module 701 is further configured to:
in response to receiving the first notification message, displaying a first prompt message; wherein the first notification message at least comprises a first video identifier and a content identifier;
and in response to receiving the selection signal of the first prompt message, receiving and playing the first video from the server according to the identification of the first video.
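A sketch of this notification-driven receiving step, with the UI prompt and the server request injected as callables; the message fields and names are assumptions for illustration, not part of the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NotificationMessage:
    """The first notification message carries at least these identifiers."""
    first_video_id: str
    content_id: str
    second_video_id: Optional[str] = None  # present in some embodiments

def on_first_notification(msg, show_prompt, fetch_video):
    """Display the first prompt message; if the user accepts, receive the
    first video from the server according to its identifier."""
    if not show_prompt("Join the video interaction?"):
        return None  # the user declined; nothing is fetched or played
    return fetch_video(msg.first_video_id)
```

Injecting `show_prompt` and `fetch_video` keeps the flow testable without a real UI or server; in practice they would be the prompt-interface callback and an HTTP request keyed by the video identifier.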
Further, the content display module 702 is further configured to:
and acquiring and displaying the candidate content information from the server according to the content identification.
Further, the first video is obtained by the second terminal displaying a shooting interface on the interface of the second terminal for shooting, after a second user selects content information from the candidate content information displayed on the interface of the second terminal.
Further, the first video is obtained by the second terminal displaying a shooting interface on the display interface of the second terminal for shooting, after the second terminal receives and plays a second video sent by a third terminal.
Further, the synthesizing module 704 is further configured to:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises the second display effect.
Further, the synthesizing module 704 is further configured to:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises the second display effect.
Further, the synthesizing module 704 is further configured to:
scaling the first video and the second video to generate the composite video;
and the zoomed first video and the zoomed second video are displayed in a playing interface of the composite video in parallel.
Further, the first notification message further includes a second video identifier, wherein the second video is obtained through the following step:
and acquiring the second video from a server according to the second video identifier.
The apparatus shown in fig. 7 can perform the methods of the embodiments shown in fig. 1 to 5; for parts of this embodiment not described in detail, reference may be made to the related descriptions of the embodiments shown in fig. 1 to 5. For the implementation process and technical effect of this technical solution, reference is likewise made to the descriptions in the embodiments shown in fig. 1 to 5, which are not repeated herein.
Referring now to FIG. 8, shown is a schematic diagram of an electronic device 800 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 8, an electronic device 800 may include a processing means (e.g., central processing unit, graphics processor, etc.) 801 that may perform various appropriate actions and processes in accordance with a program stored in a Read-Only Memory (ROM) 802 or a program loaded from a storage means 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data necessary for the operation of the electronic device 800 are also stored. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
Generally, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage 808 including, for example, magnetic tape, hard disk, etc.; and a communication device 809. The communication means 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. While fig. 8 illustrates an electronic device 800 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication means 809, or installed from the storage means 808, or installed from the ROM 802. The computer program, when executed by the processing apparatus 801, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive and play a first video; display candidate content information; in response to receiving a content selection signal, acquire selected content information from the candidate content information; and generate a composite video according to the selected content information and the first video, wherein the composite video comprises a display effect indicating whether the selected content information corresponds to the first video.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a video generation method including:
receiving and playing a first video;
displaying the candidate content information;
acquiring selected content information from the candidate content information in response to receiving a content selection signal;
generating a composite video according to the selected content information and the first video; and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video.
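The four steps just listed can be sketched as one terminal-side routine, with the UI and composition operations injected as callables (all names here are illustrative assumptions, not from the disclosure):

```python
def video_generation_method(play_first_video, display_candidates,
                            wait_for_selection, compose):
    """Run the claimed steps in order on the first terminal."""
    first_video = play_first_video()            # receive and play the first video
    candidates = display_candidates()           # display candidate content information
    selected = wait_for_selection(candidates)   # handle the content selection signal
    # The composite video carries a display effect indicating whether
    # `selected` corresponds to the first video.
    return compose(selected, first_video)
```

Separating the steps this way mirrors the module split of the apparatus (video receiving, content display, content information acquisition, synthesis) described below.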
Further, the receiving and playing the first video includes:
in response to receiving the first notification message, displaying a first prompt message; wherein the first notification message at least comprises a first video identifier and a content identifier;
and in response to receiving the selection signal of the first prompt message, receiving and playing the first video from the server according to the identification of the first video.
Further, the displaying the candidate content information includes:
and acquiring and displaying the candidate content information from the server according to the content identification.
Further, the first video is obtained by the second terminal displaying a shooting interface on the interface of the second terminal for shooting, after a second user selects content information from the candidate content information displayed on the interface of the second terminal.
Further, the first video is obtained by the second terminal displaying a shooting interface on the display interface of the second terminal for shooting, after the second terminal receives and plays a second video sent by a third terminal.
Further, the generating a composite video according to the selected content information and the first video includes:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises the second display effect.
Further, the generating a composite video according to the selected content information and the first video includes:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises the second display effect.
Further, the generating the composite video according to the first video and the second video includes:
scaling the first video and the second video to generate the composite video;
and the zoomed first video and the zoomed second video are displayed in a playing interface of the composite video in parallel.
Further, the first notification message further includes a second video identifier, wherein the second video is obtained through the following step:
and acquiring the second video from a server according to the second video identifier.
According to one or more embodiments of the present disclosure, there is provided a video generating apparatus including:
the video receiving module is used for receiving and playing a first video;
the content display module is used for displaying the candidate content information;
a content information acquisition module for acquiring content information selected from the candidate content information in response to receiving a content selection signal;
the synthesis module is used for generating a synthesized video according to the selected content information and the first video;
and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video. Further, the video receiving module is further configured to:
in response to receiving the first notification message, displaying a first prompt message; wherein the first notification message at least comprises a first video identifier and a content identifier;
and in response to receiving the selection signal of the first prompt message, receiving and playing the first video from the server according to the identification of the first video.
Further, the content display module is further configured to:
and acquiring and displaying the candidate content information from the server according to the content identification.
Further, the first video is obtained by the second terminal displaying a shooting interface on the interface of the second terminal for shooting, after a second user selects content information from the candidate content information displayed on the interface of the second terminal.
Further, the first video is obtained by the second terminal displaying a shooting interface on the display interface of the second terminal for shooting, after the second terminal receives and plays a second video sent by a third terminal.
Further, the synthesis module is further configured to:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises the second display effect.
Further, the synthesis module is further configured to:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises the first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises the second display effect.
Further, the synthesis module is further configured to:
scaling the first video and the second video to generate the composite video;
and the zoomed first video and the zoomed second video are displayed in a playing interface of the composite video in parallel.
Further, the first notification message further includes a second video identifier, wherein the second video is obtained through the following step:
and acquiring the second video from a server according to the second video identifier.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the video generation methods described above.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium characterized by storing computer instructions for causing a computer to execute any of the video generation methods described above.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.

Claims (12)

1. A video generation method, comprising, in a first terminal:
receiving and playing a first video;
displaying the candidate content information;
acquiring selected content information from the candidate content information in response to receiving a content selection signal;
generating a composite video according to the selected content information and the first video; and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video.
2. The video generation method of claim 1, wherein said receiving and playing the first video comprises:
in response to receiving the first notification message, displaying a first prompt message; wherein the first notification message at least comprises a first video identifier and a content identifier;
and in response to receiving the selection signal of the first prompt message, receiving and playing the first video from the server according to the identification of the first video.
3. The video generation method of claim 2, wherein the displaying candidate content information comprises:
and acquiring and displaying the candidate content information from the server according to the content identification.
4. The video generation method according to claim 1, wherein the first video is obtained by a second terminal displaying a shooting interface on an interface of the second terminal for shooting, according to content information selected by a second user from the candidate content information displayed on the interface of the second terminal.
5. The video generation method according to claim 1, wherein the first video is obtained by displaying a shooting interface on a display interface of a second terminal for shooting after the second terminal receives and plays a second video sent by a third terminal.
6. The video generation method of claim 1, wherein the generating a composite video from the selected content information and the first video comprises:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises a first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video, wherein the composite video comprises a second display effect.
7. The video generation method of claim 1 or 2, wherein the generating a composite video from the selected content information and the first video comprises:
in response to the selected content information corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises a first display effect;
and in response to the selected content information not corresponding to the first video, generating the composite video according to the first video and the second video, wherein the composite video comprises a second display effect.
8. The video generation method of claim 7, wherein the generating the composite video from the first and second videos comprises:
scaling the first video and the second video to generate the composite video;
and the zoomed first video and the zoomed second video are displayed in a playing interface of the composite video in parallel.
9. The video generation method of claim 7, wherein the first notification message further includes a second video identifier, wherein the second video is obtained by:
and acquiring the second video from a server according to the second video identifier.
10. A video generation apparatus provided in a first terminal, comprising:
the video receiving module is used for receiving and playing a first video;
the content display module is used for displaying the candidate content information;
a content information acquisition module for acquiring content information selected from the candidate content information in response to receiving a content selection signal;
the synthesis module is used for generating a synthesized video according to the selected content information and the first video; and the composite video comprises a display effect indicating whether the selected content information corresponds to the first video.
11. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions, such that the processor, when executing the instructions, implements the method of any one of claims 1-9.
12. A non-transitory computer readable storage medium storing computer readable instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-9.
CN202011147009.7A 2020-10-23 2020-10-23 Video generation method and device, electronic equipment and computer readable storage medium Active CN112261317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011147009.7A CN112261317B (en) 2020-10-23 2020-10-23 Video generation method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112261317A true CN112261317A (en) 2021-01-22
CN112261317B CN112261317B (en) 2022-09-09

Family

ID=74263540


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107596689A (en) * 2017-09-07 2018-01-19 广州华多网络科技有限公司 A kind of question and answer mode interaction control method, apparatus and system
US20180070143A1 (en) * 2016-09-02 2018-03-08 Sony Corporation System and method for optimized and efficient interactive experience
CN109660873A (en) * 2018-11-02 2019-04-19 北京达佳互联信息技术有限公司 Exchange method, interactive device and computer readable storage medium based on video
CN110417728A (en) * 2019-06-10 2019-11-05 北京字节跳动网络技术有限公司 A kind of online interaction method, apparatus, medium and electronic equipment
CN111405381A (en) * 2020-04-17 2020-07-10 深圳市即构科技有限公司 Online video playing method, electronic device and computer readable storage medium
CN111436000A (en) * 2019-01-12 2020-07-21 北京字节跳动网络技术有限公司 Method, device, equipment and storage medium for displaying information on video

Also Published As

Publication number Publication date
CN112261317B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN110519611B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN106254311B (en) Live broadcast method and device and live broadcast data stream display method and device
CN106534757B (en) Face exchange method and device, anchor terminal and audience terminal
CN105704504B (en) Method, device, equipment and storage medium for inserting push information in live video
CN113411642B (en) Screen projection method and device, electronic equipment and storage medium
EP4047490A1 (en) Video-based interaction realization method and apparatus, device and medium
JP7443621B2 (en) Video interaction methods, devices, electronic devices and storage media
CN113225483B (en) Image fusion method and device, electronic equipment and storage medium
CN112337100B (en) Live broadcast-based data processing method and device, electronic equipment and readable medium
CN111526411A (en) Video processing method, device, equipment and medium
CN114390308B (en) Interface display method, device, equipment, medium and product in live broadcast process
CN112337101A (en) Live broadcast-based data interaction method and device, electronic equipment and readable medium
CN114727146B (en) Information processing method, device, equipment and storage medium
CN112073821A (en) Information prompting method and device, electronic equipment and computer readable medium
CN112337104A (en) Live broadcast data processing method and device, electronic equipment and readable medium
CN111935442A (en) Information display method and device and electronic equipment
CN114173139A (en) Live broadcast interaction method, system and related device
CN112312163A (en) Video generation method and device, electronic equipment and storage medium
CN112261317B (en) Video generation method and device, electronic equipment and computer readable storage medium
CN115174946B (en) Live page display method, device, equipment, storage medium and program product
CN116089700A (en) Searching method, searching device, electronic equipment and storage medium
CN115225948A (en) Live broadcast room interaction method, device, equipment and medium
CN110166825B (en) Video data processing method and device and video playing method and device
CN113318437A (en) Interaction method, device, equipment and medium
CN112163237A (en) Data processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant