CN115022697A - Method for displaying video added with content element, electronic device and program product - Google Patents

Method for displaying video added with content element, electronic device and program product

Info

Publication number
CN115022697A
Authority
CN
China
Prior art keywords
video
content
source video
source
elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210460789.3A
Other languages
Chinese (zh)
Inventor
宋键
车明明
曾宪明
石寅涛
钟步晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN202210460789.3A priority Critical patent/CN115022697A/en
Publication of CN115022697A publication Critical patent/CN115022697A/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a display method, an electronic device, and a program product for a video with added content elements, relating to the field of computer technology. The method includes: acquiring a source video to be processed and content elements corresponding to the source video; playing the source video while displaying the content elements on top of it; in response to a synthesis instruction, acquiring video frames from the source video, together with the element picture of the content element displayed when each video frame is played and the display proportion of that element picture; generating picture resources comprising the video frames and content elements according to a preset video generation resolution and the display proportion; and generating a composite video from the picture resources corresponding to the video frames. With this scheme, no synthesis of the source video and content elements is required at preview time: the content elements are simply displayed on a layer above the source video during playback, which improves the preview speed of the composite video's display effect.

Description

Method for displaying video added with content element, electronic device and program product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an electronic device, and a program product for displaying a video to which a content element is added.
Background
At present, terminal devices commonly provide a function of adding elements to a video, such as adding text, patterns, or animations.
In related-art schemes, the terminal device synthesizes the video with the elements selected by the user and then displays the synthesized video; if the user is satisfied with the result, the terminal device can be operated to store or publish it.
However, because current video synthesis schemes must synthesize the video before it can be displayed, the preview speed is low and the user experience is poor.
Disclosure of Invention
The present disclosure provides a display method for a video with added content elements, an electronic device, and a program product, which address the prior-art problem that a video must be synthesized before it can be displayed, making the preview slow and the user experience poor.
A first aspect of the present disclosure is to provide a method for displaying a video added with a content element, including:
acquiring a source video to be processed and acquiring content elements corresponding to the source video;
playing the source video and simultaneously displaying the content elements on the source video so as to show the effect of adding the content elements in the source video;
responding to a synthesis instruction, acquiring a video frame in the source video, and acquiring an element picture of a content element displayed when the video frame is played and a display proportion of the element picture;
and generating picture resources comprising the video frames and the content elements according to a preset video generation resolution and the display proportion, and generating a composite video according to the picture resources corresponding to the video frames.
A second aspect of the present disclosure is to provide a presentation apparatus for a video added with a content element, including:
an acquisition unit, configured to acquire a source video to be processed and a content element corresponding to the source video;
the preview unit is used for playing the source video and simultaneously displaying the content elements on the source video so as to show the effect of adding the content elements in the source video;
the synthesizing unit is used for responding to a synthesizing instruction, acquiring a video frame in the source video, and acquiring an element picture of a content element displayed when the video frame is played and a display proportion of the element picture; and generating picture resources comprising the video frames and the content elements according to a preset video generation resolution and the display proportion, and generating a composite video according to the picture resources corresponding to the video frames.
A third aspect of the present disclosure is to provide an electronic device, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method as described in the first aspect above.
A fourth aspect of the present disclosure is to provide a computer readable storage medium having stored thereon a computer program for execution by a processor to implement the method as described in the first aspect above.
A fifth aspect of the disclosure is to provide a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the first aspect above.
The display method for a video with added content elements, the electronic device, and the program product provided by the present disclosure have the following technical effects:
the present disclosure provides a display method, an electronic device, and a program product of a video added with a content element, including: acquiring a source video to be processed and acquiring content elements corresponding to the source video; playing the source video and simultaneously displaying the content elements on the source video so as to show the effect of adding the content elements in the source video; responding to the synthesis instruction, acquiring a video frame in the source video, and acquiring an element picture of a content element and a display proportion of the element picture displayed when the video frame is played; and generating picture resources comprising video frames and content elements according to the preset video generation resolution and display proportion, and generating a composite video according to the picture resources corresponding to the video frames. According to the method provided by the disclosure, when the composite video is previewed, the composite operation of the source video and the content elements is not needed, the content elements are only needed to be displayed on the upper layer when the source video is played, and after a composite instruction of a user is received, the composite video is generated based on the effect of adding the content elements in the source video, so that the previewing speed of the display effect of the composite video can be improved.
Drawings
FIG. 1 is a schematic diagram of an application interface shown in an exemplary embodiment of the present disclosure;
fig. 2 is a flowchart illustrating a method for presenting a video added with a content element according to an exemplary embodiment of the disclosure;
FIG. 3 illustrates an interface diagram for an exemplary embodiment of the present disclosure;
FIG. 4 illustrates an interface schematic for another exemplary embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a method for presenting a video added with a content element according to another exemplary embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a presentation apparatus for a video added with a content element according to an exemplary embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a presentation apparatus for a video added with a content element according to another exemplary embodiment of the present disclosure;
fig. 8 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
Detailed Description
Fig. 1 is a schematic diagram of an application interface according to an exemplary embodiment of the present disclosure.
As shown in fig. 1, the user may select a source video as well as a content element to be added to it; the terminal device can then generate a composite video from the source video and the content element and display it.
For example, the user may select a financial product to share, and the mobile terminal may generate the source video from that financial product. In this implementation, the source video is selected by choosing the financial product to be shared.
For example, the terminal device may capture a picture in the source video, add a content element to the captured picture, combine the pictures with the content element added to obtain a composite video, and play the composite video for preview.
However, this approach requires the terminal device to synthesize pictures frequently, which slows down the video preview.
Fig. 2 is a flowchart illustrating a method for presenting a video added with a content element according to an exemplary embodiment of the present disclosure.
As shown in fig. 2, the method for displaying a video added with a content element provided by the present disclosure includes:
step 201, a source video to be processed is obtained, and a content element corresponding to the source video is obtained.
The method provided by the present disclosure may be executed by an electronic device with computing capability, which may be, for example, a mobile terminal, such as a smartphone.
Specifically, the mobile terminal may obtain a source video, where the source video refers to a video to which a special effect needs to be added, for example, an existing video may be selected by a user as the source video. The source video may also be generated by the mobile terminal, for example, in a financial-related application, the mobile terminal may generate a source video corresponding to a financial product for each financial product purchased by the user.
Further, the mobile terminal may also obtain content elements corresponding to the source video. A content element is an element to be added to the source video, for example an animation, text, or a picture.
In practical application, the content element may be configured by the user, or obtained by the mobile terminal from the content of the source video. For example, if the mobile terminal determines from the financial product that the user's profit is N and is rising, it may obtain an animation corresponding to a rise and use it as a content element of the source video, or use N as a text element of the source video.
Allowing the user to freely select content elements improves the flexibility of adding content elements to the source video; having the mobile terminal select them from the source video's content keeps the style of the added elements uniform across different users' operations.
Step 202, playing the source video and simultaneously displaying the content element on the source video to show the effect of adding the content element in the source video.
The mobile terminal plays the acquired source video and, during playback, displays the acquired content elements on top of it; previewing the source video with added content elements in this way improves the preview speed.
In the prior art, the source video and the content element must first be synthesized, and the synthesized video is then played for preview; this processing chain is too long, so previewing takes a long time.
Fig. 3 illustrates an interface schematic for an exemplary embodiment of the present disclosure.
As shown in fig. 3, when the mobile terminal plays the source video alone, the interface displays only the frames of the source video itself.
Fig. 4 illustrates an interface schematic for another exemplary embodiment of the present disclosure.
As shown in fig. 4, when playing the source video, the mobile terminal may further add a content element 42 on the basis of the frame 41 of the source video, and further display a frame after the content element is added in the source video for the user to preview.
The display position of a content element may be configured in advance; for example, information such as the display position may be pre-configured for each content element. The configuration may be performed by the user, or by the mobile terminal according to the content of the source video.
If the display position of a content element is configured by the user, the user can also adjust it by changing the configuration. If the display positions are pre-configured, the positions of content elements added to the source video can be kept uniform across different users' operations.
When a content element is displayed superimposed on the source video, its display proportion must also be determined. The display proportion may be configured in advance, or determined from the size of the source video and the size of the content element. For instance, a picture ratio between the content element's picture and the source video's picture may be preset, and the display proportion derived from the source video's picture and that ratio; the content element might then need to be displayed reduced by 10% or enlarged by 20%.
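The proportion derivation just described can be sketched as follows. This is a minimal illustration, not part of the disclosure: the function name, the width-based ratio convention, the default ratio, and the rounding rule are all assumptions.

```python
def element_display_size(element_size, video_size, picture_ratio=0.2):
    """Derive a content element's display size from a preset picture ratio:
    the element's width is scaled to a fixed fraction of the source video's
    width, preserving the element's aspect ratio. Returns (size, scale)."""
    elem_w, elem_h = element_size
    video_w, _video_h = video_size
    scale = (video_w * picture_ratio) / elem_w   # e.g. 0.9 = reduced 10%, 1.2 = enlarged 20%
    return (round(elem_w * scale), round(elem_h * scale)), scale
```

For a 200x100 element over a 1080x1920 source video with a preset ratio of 0.2, this yields a 216x108 display size at scale 1.08.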
In one implementation, the content element is displayed at the same proportion in every frame; in another, the proportion may differ between frames or be the same for only some of them — for example, when the content element contains multiple pictures of different sizes, different pictures may be displayed at different proportions.
Step 203, responding to the synthesis instruction, acquiring the video frame in the source video, and acquiring the element picture of the content element and the display proportion of the element picture displayed when the video frame is played.
After previewing the effect of superimposing the content element on the source video, if the user is satisfied with the effect, the user may operate the electronic device to send it a synthesis instruction. For example, a synthesis button may be provided in the preview interface; clicking it triggers the electronic device to generate a composite video based on the currently displayed effect.
Specifically, when generating the composite video, the electronic device acquires video frames from the source video, either frame by frame or all at once.
Further, the element picture of the content element displayed when each video frame is played can be acquired. For example, when the content element is an animation, the animation pictures displayed during different video frames may differ; in this case, the animation picture corresponding to each video frame is acquired, and these pictures may all be different. Conversely, when the content element is text, the text picture displayed may be the same for every video frame; the text picture corresponding to each video frame is still acquired, and these pictures may be identical.
In practical application, the display proportion of the corresponding element picture can also be obtained when each video frame is played. In one case, all element pictures of a content element share the same display proportion; in another, different element pictures may have different display proportions.
Step 204, generating picture resources including the video frames and the content elements according to the preset video generation resolution and display proportion, and generating a composite video according to the picture resources corresponding to the video frames.
The electronic device may superimpose each element picture on its video frame according to the element picture's display proportion to obtain a composite image, and then scale the composite image to the preset video generation resolution to obtain a picture resource comprising the video frame and the content element.
For example, the adjusted element picture and the video frame picture may be encoded together to obtain the composite image.
Specifically, the picture resources are concatenated to obtain the composite video: the first picture resource of the first video frame, the second picture resource of the second video frame, ..., and the Nth picture resource of the Nth video frame are joined to obtain a composite video of N frames.
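The per-frame superposition and concatenation in this step can be sketched with nested lists standing in for pixel buffers. This is a simplified model under stated assumptions: a real implementation would rasterize at the preset generation resolution and feed an encoder, and the replace-on-overlay rule here stands in for proper alpha blending.

```python
def composite_frame(video_frame, element_picture, position):
    """Overlay an element picture (2-D list of pixels, None = transparent)
    onto a copy of a video frame at the given (x, y) top-left position,
    producing the image of one picture resource."""
    out = [row[:] for row in video_frame]      # copy; leave the source frame intact
    x0, y0 = position
    for dy, row in enumerate(element_picture):
        for dx, pixel in enumerate(row):
            if pixel is not None:              # skip transparent pixels
                out[y0 + dy][x0 + dx] = pixel
    return out

def composite_video(video_frames, element_pictures, position):
    """Concatenate per-frame picture resources in order: the i-th resource
    pairs the i-th video frame with the element picture shown while that
    frame was played."""
    return [composite_frame(f, e, position)
            for f, e in zip(video_frames, element_pictures)]
```

Frame N of the output is always built from frame N of the source plus the element picture captured at that moment, matching the ordering described above.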
The method for displaying a video with added content elements provided by the present disclosure thus includes: acquiring a source video to be processed and content elements corresponding to the source video; playing the source video while displaying the content elements on top of it, so as to show the effect of adding the content elements to the source video; in response to a synthesis instruction, acquiring video frames from the source video, together with the element picture of the content element displayed when each video frame is played and the display proportion of that element picture; generating picture resources comprising the video frames and content elements according to a preset video generation resolution and the display proportion; and generating a composite video from the picture resources corresponding to each video frame. With this method, no synthesis of the source video and content elements is needed at preview time: the content elements are simply displayed on a layer above the source video during playback, and the composite video is generated based on the displayed effect only after the user's synthesis instruction is received, which improves the preview speed of the composite video's display effect.
Fig. 5 is a flowchart illustrating a method for presenting a video added with a content element according to another exemplary embodiment of the present disclosure.
Step 501, preloading a plurality of prepared source videos, and determining the source video to be processed among them according to the user's selection operation.
The mobile terminal may also preload several prepared source videos — videos made available in advance — to further improve the video preview speed.
In one application scenario, the disclosed scheme is applied in an application that offers financial products for purchase, where a user can share the earnings of products they have bought. The user selects the product whose earnings they want to share, and the mobile terminal generates a source video from that product's data and adds content elements to it for display.
When the user has purchased many financial products, the mobile terminal can generate and preload prepared source videos for them in advance based on user operations, so that when the user selects a product to share, its source video can be obtained quickly.
For example, suppose the user has purchased 5 financial products in the application; the user may invoke the sharing function and choose a product to share from among them. If the user is currently browsing the first financial product, the mobile terminal may preload its source video, and may also preload the source videos of the second and third products, so that if the user then selects one of those, its video can be previewed quickly.
In practical application, the mobile terminal can acquire the data of a financial product to generate its prepared source video, which can then be preloaded. For example, a picture of the financial product may be generated and used as a frame of the prepared source video, or a prepared source video containing the product's profit trend may be generated.
Further, the user may select a source video that requires the addition of a content element. In one embodiment, the user may select a financial product that the user wishes to share, and the mobile terminal may use a prepared source video corresponding to the financial product as a current source video to be processed.
In practice, the content element includes any one of the following: animation elements, video elements, pattern elements, and text elements. Different content elements may require different processing flows.
The animation element refers to a preset animation, such as a motion picture, a video of animation material, and the like.
Video elements refer to video content such as a person, a scene, and the like.
A pattern element refers to a pattern.
A text element refers to textual content, such as "rising".
Step 502, if the content element includes any one of an animation element, a video element, and a pattern element, content information in the source video is acquired.
Step 503, determining the matched content element in the preset element library according to the content information.
In an optional implementation, the content element may be any of an animation element, a video element, or a pattern element. To keep the style of composite videos consistent across different users' operations, the solution provided by the present disclosure may configure an element library in advance, storing preset animation, video, and pattern elements.
Each preset content element corresponds to content information in the source video; for example, some content elements correspond to "rising" and others to "falling".
Specifically, the mobile terminal may obtain content information in the source video — here, the profit information of the financial product — and determine the matching content elements in the element library.
Further, the determined matching content element may be an animation type element, a video type element, or a pattern type element.
In another embodiment, the user may select a content element type, and the mobile terminal may determine, in the preset element library, the content element that matches the content information and belongs to the type selected by the user.
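The library lookup can be sketched as follows. The keys, element names, and exact-match rule are illustrative assumptions; a real implementation might match on richer content information than a single keyword.

```python
# Hypothetical preset element library keyed by (content information, element type).
ELEMENT_LIBRARY = {
    ("rising", "animation"): "arrow_up_animation",
    ("falling", "animation"): "arrow_down_animation",
    ("rising", "pattern"): "up_badge_pattern",
}

def match_content_element(content_info, element_type):
    """Return the preset element matching the source video's content
    information and the user-selected element type, or None if the
    library has no entry for that combination."""
    return ELEMENT_LIBRARY.get((content_info, element_type))
```

Because the same content information always maps to the same library entry, different users sharing the same rising product get a composite video with a uniform style.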
In an optional implementation, the content element comprises an animation element and/or a video element. In this implementation, after generating the prepared source video and its corresponding animation and/or video element, the mobile terminal may align them to determine the time correspondence between the prepared source video and the animation and/or video element during playback. Then, once the user selects the source video, the source video and the animation and/or video element can be played directly based on this correspondence when previewing the source video with added content elements, further improving the preview speed.
Specifically, the mobile terminal may determine the time correspondence between content time and element time according to the content time of the prepared source video and the element time of the animation and/or video element; this correspondence is then used to display the content element on top of the prepared source video when it is played.
The content time of the prepared source video and the element time of the animation and/or video element may be aligned on the time axis; for example, the animation and/or video element may be adjusted so that its playing duration equals that of the source video, after which the two can be aligned. Specific adjustments include dropping or adding frames.
Step 504, if the content element includes a text element, obtaining the text entered by the user for addition to the source video.
Specifically, if the content element added to the source video is a text element, the corresponding text may be input by the user; the mobile terminal obtains this text and uses it as the text element.
Further, having the user input the text elements improves the flexibility of the content elements added to the source video.
Step 505, playing the source video while displaying the content elements over the source video, so as to show the effect of adding the content elements to the source video.
In practical application, if the content elements added to the source video include pattern elements and copy elements, the corresponding patterns and copy can be drawn over each played frame while the source video is played, thereby previewing the result.
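A minimal sketch of this preview approach, with hypothetical names (the patent does not specify an implementation): the content elements are kept as separate layers and merely listed on top of each frame at display time, and no composite video is synthesized.

```python
from dataclasses import dataclass, field


@dataclass
class OverlayPreview:
    """Preview by layering content elements over source frames at display
    time, instead of synthesizing a new video file first."""
    elements: list = field(default_factory=list)  # (kind, payload) pairs

    def add_pattern(self, pattern_id):
        self.elements.append(("pattern", pattern_id))

    def add_copy(self, text):
        self.elements.append(("copy", text))

    def render(self, frame_label):
        # Describes the layer order for one frame; actual drawing would be
        # done by the terminal's UI layer.
        return [frame_label] + [f"{kind}:{payload}" for kind, payload in self.elements]


preview = OverlayPreview()
preview.add_pattern("sticker_01")
preview.add_copy("Hello")
```

Because the source frames are never rewritten, the preview can start as soon as the source video plays, which is the speed advantage the passage attributes to this design.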
Wherein, if the content element includes an animation element and/or a video element, step 505 further includes:
determining a time correspondence between the content time and the element time according to the content time of the source video and the element time of the animation element and/or the video element;
and when the source video is played, simultaneously playing the animation element and/or the video element over the source video according to the time correspondence.
Specifically, the mobile terminal may perform time alignment between the source video and the animation element and/or the video element to be added after the source video is loaded, or perform the alignment just before previewing when a preview is required.
Further, the time correspondence may be obtained by determining, according to the total content duration of the source video and the total element duration of the animation element and/or the video element, the correspondence between each content moment in the source video and each element moment in the animation element and/or the video element.
For example, if the total content duration is T1 and the total element duration is T2, the total element duration can be adjusted from T2 to T1 by adjusting the animation element and/or the video element. If T2 is greater than T1, frames in the animation element and/or the video element may be dropped to shorten its playing duration. If T2 is less than T1, frames may be added to the animation element and/or the video element to extend its playing duration; the added frames may be existing frames of the element, for example some frames may be played repeatedly.
In actual application, after this adjustment the updated total element duration equals the total content duration, so that at any moment the frame of the source video and the corresponding frame of the animation element and/or the video element can be played simultaneously, achieving the preview effect.
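The frame dropping and frame adding described above can be sketched as an even resampling of the element's frame indices. This is an assumed implementation for illustration, not one given by the patent:

```python
def adjust_element_frames(frames, target_count):
    """Drop or repeat element frames so the element spans the same number
    of frames as the source video (adjusting T2 to T1). Added frames are
    repeats of existing frames, as the description suggests; nothing new
    is drawn."""
    if target_count <= 0:
        return []
    n = len(frames)
    # Evenly resampled indices: drops frames when n > target_count,
    # repeats frames when n < target_count.
    return [frames[i * n // target_count] for i in range(target_count)]


# 6-frame element shortened to 3 frames (frame dropping):
shortened = adjust_element_frames(list("abcdef"), 3)
# 3-frame element stretched to 6 frames (repeating existing frames):
stretched = adjust_element_frames(list("abc"), 6)
```

After resampling, frame i of the element pairs with frame i of the source video, giving the one-to-one playback the paragraph above describes.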
Step 506, in response to a synthesis instruction, acquiring the video frames in the source video, and acquiring the element pictures of the content elements displayed when each video frame is played, together with the display proportions of those element pictures.
Step 507, generating, according to a preset video generation resolution and the display proportions, picture resources that include the video frames and the content elements, and generating a composite video from the picture resources corresponding to the video frames.
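As a hedged sketch of step 507 (the function name, tuple layout, and centre-anchor convention are assumptions, not from the patent), the display proportion captured at preview time can be converted into a pixel rectangle at the preset video generation resolution:

```python
def element_rect(output_w, output_h, display_ratio, anchor=(0.5, 0.5)):
    """Scale an element picture by the display proportion recorded at
    preview time, expressed relative to the output (video generation)
    resolution, and centre it at the given anchor point.

    display_ratio: (width fraction, height fraction) of the output frame.
    anchor: fractional position of the element's centre in the frame.
    Returns (left, top, width, height) in output pixels."""
    elem_w = round(output_w * display_ratio[0])
    elem_h = round(output_h * display_ratio[1])
    left = round(output_w * anchor[0] - elem_w / 2)
    top = round(output_h * anchor[1] - elem_h / 2)
    return (left, top, elem_w, elem_h)


# An element occupying 25% x 10% of a 1280x720 output, centred:
rect = element_rect(1280, 720, (0.25, 0.10))
```

Rendering each video frame with its element pictures placed at such rectangles yields the per-frame picture resources from which the composite video is generated.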
Fig. 6 is a schematic structural diagram of a presentation apparatus for a video added with a content element according to an exemplary embodiment of the present disclosure.
As shown in fig. 6, the present disclosure provides a presentation apparatus 600 for a video added with a content element, including:
an obtaining unit 610, configured to obtain a source video to be processed, and obtain a content element corresponding to the source video;
a preview unit 620, configured to play the source video and simultaneously display the content element on the source video to show an effect of adding the content element in the source video;
a synthesizing unit 630, configured to respond to a synthesizing instruction, acquire a video frame in the source video, and acquire an element picture of a content element displayed when the video frame is played and a display ratio of the element picture; generating picture resources including the video frames and the content elements according to a preset video generation resolution and the display proportion, and generating a composite video according to the picture resources corresponding to the video frames.
According to the video display device added with the content elements, when the composite video is previewed, the source video and the content elements do not need to be combined, and only the content elements need to be displayed on the upper layer of the source video when the source video is played, so that the previewing speed of the display effect of the composite video can be improved.
Fig. 7 is a schematic structural diagram of a presentation apparatus for a video to which a content element is added according to another exemplary embodiment of the present disclosure.
As shown in fig. 7, in a presentation apparatus 700 of a video added with content elements provided by the present disclosure, on the basis of the above-described embodiment:
the content element comprises any one of the following elements:
animation elements, video elements, pattern elements, and copy elements.
Optionally, if the content element includes any one of an animation element, a video element, and a pattern element, the obtaining unit 610 includes a first obtaining module 611, configured to:
acquiring content information in the source video;
and determining the matched content elements in a preset element library according to the content information.
Optionally, if the content element includes a copy element, the obtaining unit 610 includes a second obtaining module 612, configured to:
acquire the copy content input by the user to be added to the source video.
Optionally, if the content element includes an animation element and/or a video element, the preview unit 620 includes:
an alignment module 621, configured to determine a time correspondence between the content time and the element time according to the content time of the source video and the element time of the animation element and/or the video element;
the playing module 622 is configured to play the animation element and/or the video element on the source video according to the time correspondence when playing the source video.
Optionally, the alignment module 621 is specifically configured to:
and determining the corresponding relation between each content time in the source video and each element time in the animation elements and/or the video elements according to the total content time of the source video and the total element time of the animation elements and/or the video elements, so as to obtain the time corresponding relation.
Optionally, the apparatus further includes a preloading unit 640, configured to, before the obtaining unit 610 obtains the source video to be processed:
preload a plurality of preliminary source videos, and determine, among the preliminary source videos, the source video to be processed according to a selection operation of the user.
Optionally, if the content element includes an animation element and/or a video element, the apparatus further includes an alignment unit 650, configured to, after the preloading unit 640 preloads the plurality of preliminary source videos:
determine a time correspondence between the content time and the element time according to the content time of the preliminary source video and the element time of the animation element and/or the video element; the time correspondence is used to display the content element over the preliminary source video when the preliminary source video is played.
Fig. 8 is a block diagram of an electronic device shown in an exemplary embodiment of the present disclosure.
As shown in fig. 8, the electronic device provided in this embodiment includes:
a memory 81;
a processor 82; and
a computer program;
wherein the computer program is stored in the memory 81 and configured to be executed by the processor 82 to implement any of the presentation methods of the video added with the content element as described above.
The present embodiments also provide a computer-readable storage medium, having stored thereon a computer program,
the computer program is executed by a processor to implement any of the above-described methods of presenting a video with a content element added thereto.
The present embodiment also provides a computer program including a program code that executes any one of the above-described methods of presenting a video to which a content element is added, when the computer program is executed by a computer.
Those of ordinary skill in the art will understand that: all or a portion of the steps for implementing the above-described method embodiments may be performed by hardware associated with program instructions. The foregoing program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present disclosure, and not for limiting the same; while the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present disclosure.

Claims (12)

1. A method for displaying a video added with a content element, comprising:
acquiring a source video to be processed and acquiring content elements corresponding to the source video;
playing the source video and simultaneously displaying the content elements on the source video so as to show the effect of adding the content elements in the source video;
responding to a synthesis instruction, acquiring a video frame in the source video, and acquiring an element picture of a content element displayed when the video frame is played and a display proportion of the element picture;
and generating picture resources comprising the video frames and the content elements according to a preset video generation resolution and the display proportion, and generating a composite video according to the picture resources corresponding to the video frames.
2. The method of claim 1, wherein the content element comprises any one of:
animation elements, video elements, pattern elements, and copy elements.
3. The method according to claim 2, wherein if the content element includes any one of an animation element, a video element, and a pattern element, the obtaining the content element corresponding to the source video includes:
acquiring content information in the source video;
and determining the matched content elements in a preset element library according to the content information.
4. The method of claim 2, wherein if the content element comprises a copy element, the obtaining the content element corresponding to the source video comprises:
acquiring the copy content input by the user to be added to the source video.
5. The method of claim 2, wherein if the content element comprises an animation element and/or a video element, the displaying the content element over the source video comprises:
determining a time corresponding relation between the content time and the element time according to the content time of the source video and the element time of the animation element and/or the video element;
and when the source video is played, the animation elements and/or the video elements are simultaneously played on the source video according to the time corresponding relation.
6. The method according to claim 5, wherein the determining a temporal correspondence between the content time and the element time according to the content time of the source video and the element time of the animation element and/or video element comprises:
and determining the corresponding relation between each content moment in the source video and each element moment in the animation elements and/or the video elements according to the total content duration of the source video and the total element duration of the animation elements and/or the video elements to obtain the time corresponding relation.
7. The method according to any one of claims 1-3, wherein before the obtaining the source video to be processed, the method further comprises:
preloading a plurality of preliminary source videos, and determining, among the preliminary source videos, the source video to be processed according to a selection operation of the user.
8. The method of claim 7, wherein if the content element comprises an animation element and/or a video element, after the preloading the plurality of preliminary source videos, the method further comprises:
determining a time correspondence between the content time and the element time according to the content time of the preliminary source video and the element time of the animation element and/or the video element; the time correspondence is used to display the content element over the preliminary source video when the preliminary source video is played.
9. A presentation apparatus for a video added with a content element, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a source video to be processed and acquiring content elements corresponding to the source video;
the preview unit is used for playing the source video and simultaneously displaying the content elements on the source video so as to show the effect of adding the content elements in the source video;
the synthesizing unit is used for responding to a synthesizing instruction, acquiring a video frame in the source video, and acquiring an element picture of a content element displayed when the video frame is played and a display proportion of the element picture; and generating picture resources comprising the video frames and the content elements according to a preset video generation resolution and the display proportion, and generating a composite video according to the picture resources corresponding to the video frames.
10. An electronic device, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any of claims 1-8.
11. A computer-readable storage medium, having stored thereon a computer program,
the computer program is executed by a processor to implement the method according to any one of claims 1 to 8.
12. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN202210460789.3A 2022-04-28 2022-04-28 Method for displaying video added with content element, electronic device and program product Pending CN115022697A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210460789.3A CN115022697A (en) 2022-04-28 2022-04-28 Method for displaying video added with content element, electronic device and program product


Publications (1)

Publication Number Publication Date
CN115022697A true CN115022697A (en) 2022-09-06

Family

ID=83068033


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108156520A (en) * 2017-12-29 2018-06-12 珠海市君天电子科技有限公司 Video broadcasting method, device, electronic equipment and storage medium
CN109495791A (en) * 2018-11-30 2019-03-19 北京字节跳动网络技术有限公司 A kind of adding method, device, electronic equipment and the readable medium of video paster
CN110582018A (en) * 2019-09-16 2019-12-17 腾讯科技(深圳)有限公司 Video file processing method, related device and equipment
WO2021249414A1 (en) * 2020-06-10 2021-12-16 阿里巴巴集团控股有限公司 Data processing method and system, related device, and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination