CN112463017A - Interactive element synthesis method and related device

Info

Publication number: CN112463017A
Application number: CN202011494849.0A
Authority: CN (China)
Prior art keywords: target, view, sub, root, video
Legal status: Granted; active
Other languages: Chinese (zh)
Other versions: CN112463017B
Inventors: 刘美光, 王楠, 姜洪健
Current assignees: Agricultural Bank of China Financial Technology Co., Ltd.; Agricultural Bank of China
Original assignee: Agricultural Bank of China
Application filed by Agricultural Bank of China; priority to CN202011494849.0A (filed 2020-12-17)
Publication of CN112463017A; application granted; publication of CN112463017B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses an interactive element synthesis method and a related device, wherein the method comprises: acquiring the target interaction elements required for target video synthesis; constructing, in a memory and according to the target interaction elements, the target root view and the target sub-views required by the target interaction elements, where the root view is an abstract instance in the memory used to identify an interactive element and a sub-view is an abstract instance in the memory used to implement a function required by an interactive element; and synthesizing the target root view and the target sub-views into the target video to generate a synthesized video carrying the target interaction elements. Because the target root view and the target sub-views are abstract instances in the memory, synthesizing them into the target video requires no large number of read-write operations to fetch the pictures corresponding to interactive elements from a disk, which reduces the file read-write time, shortens the waiting time of the video-making user, and improves the user experience.

Description

Interactive element synthesis method and related device
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to an interactive element synthesis method and a related apparatus.
Background
With the development of science and technology, the ways in which people record their lives and moods have become increasingly diverse, gradually shifting from text at the beginning to pictures and videos. Video, as a content form that integrates text, images, and audio, makes the record more vivid; short video in particular, by virtue of its brevity, suits users' fragmented usage habits and therefore attracts more and more users.
To improve the user experience and enrich the playback effect of a video, the related art allows a user to add interactive elements, such as props, graffiti, and emoticons, to a video. Since a video contains multiple video frames, the interactive element must be synthesized into each video frame of the video to complete the addition of the interactive element to the video.
However, as the number of interactive elements added to a video increases, the video processing time increases, and the user's waiting time increases accordingly, which runs contrary to the original intention of improving the user experience.
Disclosure of Invention
In view of the above problems, the present application provides an interactive element synthesis method and a related apparatus, which reduce the time taken to synthesize interactive elements into a video and improve the user experience.
One aspect of the present application provides a method for synthesizing an interactive element, including:
acquiring target interaction elements required by target video synthesis;
according to the target interaction element, constructing, in a memory, the target root view and the target sub-view required by the target interaction element; the root view is an abstract instance in the memory used to identify an interactive element, and the sub-view is an abstract instance in the memory used to implement a function required by an interactive element;
and synthesizing the target root view and the target sub view into the target video to generate a synthesized video with the target interaction elements.
Optionally, the method further includes:
acquiring interactive elements required by video synthesis;
constructing the view required by an interactive element according to its function, and storing the views in the memory in a tree structure; wherein the views comprise one root view and a plurality of child views.
Optionally, the constructing, according to the target interaction element, a target root view and a target sub-view required by the target interaction element in a memory includes:
constructing the target root view in the memory according to the target interaction elements;
adding, to the target root view, the target child view that is required by the target interaction element and inherits the attributes of the target root view.
Optionally, the attribute of the target root view includes a cache block, and the synthesizing the target root view and the target child view into the target video includes:
the target sub-view stores the result of an executed action into the cache block included in the target sub-view;
and rendering the target video according to the result.
Optionally, the attribute of the target root view includes a boundary, and the synthesizing the target root view and the target sub-view into the target video includes:
traversing the position attribute information of the target sub-view;
if the position of the target sub-view exceeds the boundary of the target root view, deleting the part of the target sub-view that exceeds the boundary of the target root view; if the position of the target sub-view lies within the boundary of the target root view, retaining the target sub-view;
and synthesizing the target root view and the target sub-view in the boundary of the target root view into the target video.
Optionally, the attribute of the root view includes at least one of:
width, height, position, transparency, boundary, cache block, add child view, and remove child views beyond the root view boundary.
Optionally, the sub-view includes at least one of:
a graffiti sub-view and a touch sub-view;
wherein the properties of the graffiti sub-view include at least one of:
a brush list, a path list, a mode switch and a doodle event;
the properties of the touch sub-view include at least one of:
scaling size, aspect ratio, center point coordinates, obtaining rotation angle, obtaining touch distance, and touch event processing.
In another aspect, the present application provides an interactive element synthesizing apparatus, the apparatus comprising: an acquisition unit, a construction unit and a synthesis unit;
the acquisition unit is used for acquiring target interaction elements required by target video synthesis;
the construction unit is used for constructing, in a memory and according to the target interaction element, the target root view and the target sub-view required by the target interaction element; the root view is an abstract instance in the memory used to identify an interactive element, and the sub-view is an abstract instance in the memory used to implement a function required by an interactive element;
the synthesizing unit is used for synthesizing the target root view and the target sub view into the target video to generate a synthesized video with the target interaction elements.
In another aspect, the present application provides a computer device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of the above aspect according to instructions in the program code.
In another aspect, the present application provides a computer readable storage medium for storing a computer program for performing the method of the above aspect.
Compared with the prior art, the technical scheme of the application has the advantages that:
according to the technical scheme, the target root view and the target sub-view which are required by the target interactive element can be constructed in the memory by acquiring the target interactive element required by the target video synthesis, the target root view and the target sub-view are abstract examples in the memory, and no file is generated in the process of synthesizing the target root view and the target sub-view into the target video, so that the file reading and writing operation is not involved, namely the synthesis process is completed in the memory, and pictures corresponding to the interactive element are not required to be acquired from a disk through a large amount of reading and writing operations, so that the processing time of reading and writing the file is reduced, the waiting time of a video making user is shortened, and the experience of the user is improved.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an interactive element synthesis method provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of view logic provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of the physical structure of a view provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of an interactive element synthesis apparatus provided by an embodiment of the present application;
Fig. 5 is a block diagram of a computer device provided by an embodiment of the present application.
Detailed Description
To make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in those embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
In the related art, an interactive element added to a video is generally converted into a picture and then synthesized with each video frame of the video; once it has been synthesized with all the video frames, the addition of the interactive element to the video is complete. However, as the number of added interactive elements increases, whether each interactive element is converted into a picture and synthesized with each video frame separately, or multiple interactive elements are first converted into pictures that are merged and then synthesized with each video frame, the video processing time is long, so the video-making user waits a long time and the user experience suffers.
Research shows that each interactive element, after being converted into a picture, is stored on a disk in file form. When that picture is synthesized with each video frame of the video, the corresponding picture must be obtained from the disk through read-write operations, and as the number of interactive elements increases, the file read-write operations increase with it, lengthening the video processing time.
Based on the above, the present application provides an interactive element synthesis method that reduces the read-write operations performed when synthesizing interactive elements into a video, thereby reducing the video processing time and improving the user experience.
To facilitate understanding of the technical solution of the present application, the interactive element synthesis method provided by the embodiments of the present application is described below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of an interactive element synthesis method provided by an embodiment of the present application. As shown in fig. 1, the method includes the following steps:
s101: and acquiring target interaction elements required by target video synthesis.
In practical applications, a user starts video synthesis software on a terminal device, selects a target video, and then adds the target interaction elements to be added to the target video. It can be understood that the interactive element synthesis method provided by the embodiments of the present application may be an interactive element synthesis function provided by the terminal device itself, or an interactive element synthesis service provided to the user by video synthesis software installed on the terminal device.
The target interaction elements are the interactive elements that the user wants to add to the target video. Interactive elements come in many types, such as graffiti, text, and emoticons, and the embodiments of the present application do not specifically limit the number or type of target interaction elements. For example, a user may want to add three target interaction elements to a target video: one graffiti and two emoticons.
S102: and according to the target interaction element, constructing a target root view and a target sub view required by the target interaction element in a memory.
The root view is an abstract instance in the memory used to identify interactive elements; it contains the sub-views corresponding to different interactive elements and is thus equivalent to a container of sub-views. By constraining the abstract features of interactive elements, the root view carries at least one attribute, and different attributes identify different attribute information of an interactive element. According to the target interaction elements, values can be set for the attributes of the root view, and the target root view required by the target interaction elements is thereby constructed in the memory.
The attributes of the root view can be divided into attributes that identify the key features of interactive elements and attributes that identify their key actions, so that an interactive element can exist as an abstract instance in the memory. The key-feature attributes may be width, height, position, transparency, boundary, cache block, and so on; the key-action attributes may be adding a child view, removing child views beyond the root-view boundary, and so on. The attributes of the root view are explained below, and a brief code sketch follows the list.
(1) The width and height may be set to specific values, or set relative to the video; for example, the width of the target root view may be set to half the width of the target video.
(2) The position is the position of the root view relative to the video; for example, the target root view is centered, or horizontally centered, in the target video.
(3) The transparency describes how transparent or translucent the root view is and can be set to any value between 0 and 100, a larger value indicating higher transparency; for example, the transparency of the target root view can be initialized to 100.
(4) The boundary is the region that the root view occupies in the video, and it also characterizes the active boundary of the child views; for example, a target child view cannot exceed the boundary set by the target root view.
(5) The cache block stores the results of the actions executed by the sub-views, from which the results are quickly rendered into the video.
(6) Adding a child view maintains the child views required by the interactive element; the maintenance form can be a queryable list. For example, the target sub-views recorded in the list are added to the target root view, and adding them in list order realizes the stacking effect between different target sub-views.
(7) Removing sub-views beyond the root-view boundary records, for example, whether a target sub-view in the target root view exceeds the boundary set by the target root view, so that the out-of-boundary part can subsequently be clipped or simply not displayed.
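The following is a minimal plain-Java sketch of the root-view abstraction described above. The patent specifies no programming language or API, so every class, field, and method name here is an illustrative assumption, not the inventors' implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative rectangle type used for boundaries and sub-view positions.
class Rect {
    float left, top, right, bottom;
    boolean intersects(Rect o) {
        return o.left < right && o.right > left && o.top < bottom && o.bottom > top;
    }
}

// Base class for sub-views: an in-memory abstraction implementing a function.
abstract class SubView {
    protected int[] cacheBlock;                  // inherited from the root view
    void inheritFrom(RootView root) { this.cacheBlock = root.cacheBlock; }
    abstract Rect bounds();                      // position attribute information
}

// The root view: identifies an interactive element and contains its sub-views.
class RootView {
    float width, height;       // (1) absolute, or set relative to the video size
    float x, y;                // (2) position relative to the video
    int transparency = 100;    // (3) 0..100, larger means more transparent
    Rect boundary;             // (4) region occupied in the video; also the
                               //     active boundary of the child views
    int[] cacheBlock;          // (5) buffer holding the sub-views' action results

    private final List<SubView> children = new ArrayList<>();

    // (6) Add a child view; list order gives the stacking order of sub-views.
    void addSubView(SubView v) {
        v.inheritFrom(this);   // child views inherit root-view attributes
        children.add(v);
    }

    // (7) Remove child views lying entirely beyond the root-view boundary.
    void removeOutOfBounds() {
        children.removeIf(v -> !boundary.intersects(v.bounds()));
    }

    List<SubView> subViews() { return children; }
}
```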
A child view is an abstract instance in the memory used to implement the functions required by interactive elements. Since interactive elements come in many types, the sub-views corresponding to different functions are defined through a uniform identification rule, and to realize the functional effects of different interactive elements, sub-views likewise come in many types, such as a decoration sub-view, a touch sub-view, a graffiti sub-view, and a prop sub-view. A sub-view carries at least one attribute, and different attributes implement the functions of different interactive elements. According to the functions of the target interaction elements, the corresponding target sub-views and the values of their attributes can be set, and the target sub-views required by the target interaction elements are thereby constructed in the memory.
It is understood that different sub-views have different properties, and the following description will take two sub-views as an example.
Sub-view 1: the graffiti sub-view.
The attributes of the graffiti sub-view can be divided into key-feature attributes and key-action attributes. The key-feature attributes include the brush list, the path list, mode switching, and the like. The brush list can store historical brushes, including, for example, each brush's thickness, color, and transparency. The path list can store historical trajectories, where a trajectory is the path of the hand movement from the moment the user's finger presses the screen until it leaves the screen in graffiti mode; for example, a trajectory is a series of connected segments each longer than 3 pixels. Mode switching can include switching between graffiti mode and click mode.
The key-action attributes include graffiti events, such as setting a brush attribute, undoing the most recent operation, and executing a drawing event. During an actual graffiti operation, the graffiti child view binds the user's touch events to graffiti events; for example, in graffiti mode a touch event on the user's screen generates a corresponding trajectory.
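A sketch of such a graffiti sub-view, reusing the SubView and Rect types from the root-view sketch above; again, all names are assumptions made for illustration:

```java
import java.util.ArrayList;
import java.util.List;

class GraffitiSubView extends SubView {
    static class Brush { float thickness; int color; int transparency; }
    static class Point {
        final float x, y;
        Point(float x, float y) { this.x = x; this.y = y; }
    }

    final List<Brush> brushList = new ArrayList<>();      // historical brushes
    final List<List<Point>> pathList = new ArrayList<>(); // historical trajectories
    boolean graffitiMode = true;                          // mode switch
    private List<Point> currentPath;

    void setBrush(Brush b) { brushList.add(b); }          // graffiti event

    void undo() {                                         // undo latest operation
        if (!pathList.isEmpty()) pathList.remove(pathList.size() - 1);
    }

    // Touch events are bound to graffiti events while in graffiti mode.
    void onFingerDown(float x, float y) {
        if (graffitiMode) { currentPath = new ArrayList<>(); addPoint(x, y); }
    }

    void onFingerMove(float x, float y) {
        if (graffitiMode && currentPath != null) addPoint(x, y);
    }

    void onFingerUp() {
        if (currentPath != null) { pathList.add(currentPath); currentPath = null; }
    }

    private void addPoint(float x, float y) {
        // Keep only segments longer than 3 pixels, per the trajectory rule above.
        if (!currentPath.isEmpty()) {
            Point last = currentPath.get(currentPath.size() - 1);
            if (Math.hypot(x - last.x, y - last.y) <= 3) return;
        }
        currentPath.add(new Point(x, y));
    }

    @Override Rect bounds() { return new Rect(); } // extents of pathList, elided
}
```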
Sub-view 2: the touch sub-view.
The attributes of the touch sub-view can be divided into key-feature attributes and key-action attributes. The key-feature attributes include the zoom size, the aspect ratio, the center-point coordinates, and the like. The zoom size is the size of the sub-view after it is zoomed out or in; the aspect ratio is the ratio of the sub-view's width to its height; the center-point coordinates give the center point about which the sub-view is rotated or scaled.
The key-action attributes include obtaining the rotation angle, obtaining the touch distance, touch event processing, and the like. Obtaining the rotation angle means obtaining the angle of rotation between touch points, and obtaining the touch distance means obtaining the distance between touch points. Touch event processing associates the finger-down, finger-move, and finger-up events that occur during a touch, including two-point touch events and single-point movement events.
It should be noted that the touch sub-view provides functions such as controlling the rotation and zooming of interactive elements through gestures. Since text interactive elements and emoticon interactive elements both need to be zoomed, dragged, or rotated, they can be uniformly abstracted into the touch sub-view, and their functional effects can be realized by constructing a touch sub-view.
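An illustrative touch sub-view, again extending the SubView sketch above (names assumed): the change in distance between two touch points drives scaling, and the change in the angle between them drives rotation:

```java
class TouchSubView extends SubView {
    float scale = 1f;            // zoom size
    float aspectRatio = 1f;      // width / height, kept fixed while scaling
    float centerX, centerY;      // center point of rotation and scaling
    float rotationDeg;           // accumulated rotation

    // Key action: angle between two touch points, in degrees.
    static float rotationAngle(float x1, float y1, float x2, float y2) {
        return (float) Math.toDegrees(Math.atan2(y2 - y1, x2 - x1));
    }

    // Key action: distance between two touch points.
    static float touchDistance(float x1, float y1, float x2, float y2) {
        return (float) Math.hypot(x2 - x1, y2 - y1);
    }

    // Two-point touch event processing: the change in distance scales the
    // view, and the change in angle rotates it about the center point.
    void onTwoPointMove(float oldX1, float oldY1, float oldX2, float oldY2,
                        float newX1, float newY1, float newX2, float newY2) {
        scale *= touchDistance(newX1, newY1, newX2, newY2)
               / touchDistance(oldX1, oldY1, oldX2, oldY2);
        rotationDeg += rotationAngle(newX1, newY1, newX2, newY2)
                     - rotationAngle(oldX1, oldY1, oldX2, oldY2);
    }

    @Override Rect bounds() { return new Rect(); } // derived from center and scale
}
```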
S103: and synthesizing the target root view and the target sub view into the target video to generate a synthesized video with the target interaction elements.
As can be seen from the foregoing, the target root view and the target sub-views are both abstract instances that exist in the memory. When they are synthesized into the target video, they can be composited directly in the memory with each video frame of the target video, generating a synthesized video that carries the target interaction elements the user wants to synthesize, thereby completing the synthesis process.
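A minimal sketch of this in-memory composition step. The frame format (ARGB int buffers) and the per-pixel alpha blend are assumptions made for illustration; the point is that every video frame is combined with the root view's cache block in memory, with no disk reads for element pictures:

```java
import java.util.List;

class Compositor {
    static void compose(List<int[]> frames, RootView root) {
        for (int[] frame : frames) {               // every frame of the target video
            for (int i = 0; i < frame.length; i++) {
                int src = root.cacheBlock[i];      // sub-views' rendered results
                int a = src >>> 24;                // alpha of the overlay pixel
                if (a != 0) frame[i] = blend(frame[i], src, a);
            }
        }
    }

    // Per-channel alpha blend of the overlay pixel onto the frame pixel.
    private static int blend(int dst, int src, int a) {
        int r = ((src >> 16 & 0xFF) * a + (dst >> 16 & 0xFF) * (255 - a)) / 255;
        int g = ((src >> 8 & 0xFF) * a + (dst >> 8 & 0xFF) * (255 - a)) / 255;
        int b = ((src & 0xFF) * a + (dst & 0xFF) * (255 - a)) / 255;
        return 0xFF000000 | r << 16 | g << 8 | b;
    }
}
```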
According to the technical solution, by acquiring the target interaction elements required for target video synthesis, the target root view and the target sub-views required by the target interaction elements can be constructed in the memory. Because the target root view and the target sub-views are abstract instances in the memory, synthesizing them into the target video requires no large number of read-write operations to fetch the pictures corresponding to interactive elements from a disk, which reduces the file read-write time, shortens the waiting time of the video-making user, and improves the user experience.
As a possible implementation, the views required by the interactive elements used in video synthesis may be constructed in advance, so that the target root view and the target sub-views required by the target interaction elements can later simply be called up, further reducing the time spent on interactive element synthesis.
Specifically, the interactive elements possibly required for video synthesis are obtained, the views required by these interactive elements are constructed according to their functions, and the views are stored in the memory in a tree structure, where the views comprise one root view and a plurality of sub-views. Referring to fig. 2, a schematic diagram of view logic provided by an embodiment of the present application: the view logic is a tree structure containing a root view, the root view contains a plurality of sub-views, and a sub-view may in turn contain further sub-views. For example, in fig. 2 the root view includes a decoration sub-view, a touch sub-view, and a graffiti sub-view, and the decoration sub-view may further include a prop sub-view, and so on.
In this way, the views corresponding to the interaction elements a video may need are constructed in advance according to the functions of those elements, and at use time the target views can simply be called up according to the target interaction elements. Meanwhile, because the views are stored in a tree structure, the hierarchical relationship between sub-views reduces the time spent traversing for the required target sub-views when they are later called, further reducing the time for synthesizing target interaction elements into the target video and further improving the user experience.
In addition, storing the root view and the child views in a tree structure makes it easy to extend the tree with new child views, whether horizontally or vertically. Interactive elements such as watermarks and props can each be abstracted into a corresponding sub-view and added to the tree structure, as the sketch below illustrates.
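A sketch of the tree storage shown in fig. 2, reusing the types from the earlier sketches (the composite node and the demo class are assumptions): a sub-view may itself contain sub-views, so a new element such as a watermark or prop extends the tree without new code paths:

```java
import java.util.ArrayList;
import java.util.List;

// A sub-view that can hold further sub-views, e.g. the decoration node.
class CompositeSubView extends SubView {
    final List<SubView> children = new ArrayList<>();
    void add(SubView child) { children.add(child); }
    @Override Rect bounds() { return new Rect(); }
}

class ViewTreeDemo {
    static RootView prebuild() {
        RootView root = new RootView();
        CompositeSubView decoration = new CompositeSubView(); // decoration sub-view
        decoration.add(new CompositeSubView());               // e.g. a prop sub-view
        root.addSubView(decoration);
        root.addSubView(new TouchSubView());
        root.addSubView(new GraffitiSubView());
        return root;  // held in memory, called up later by target interaction element
    }
}
```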
As a possible implementation, the target root view may first be constructed in the memory according to the target interaction elements, and the target sub-views required by the target interaction elements are then added to the target root view, for example one after another according to an attribute of the target root view. Referring to fig. 3, a schematic diagram of the physical structure of a view provided by an embodiment of the present application: the target root view sits on the video layer and includes a graffiti sub-view, a touch sub-view, and other user-defined sub-views. The order of addition is the touch sub-view, then the graffiti sub-view, then the other user-defined sub-views, and it can be seen that a target sub-view added later is stacked on top of one added earlier.
To reduce the time spent configuring target sub-views, a target sub-view may inherit the attributes of the target root view. For example, if a cache block is set in the target root view, the target sub-views added to the target root view also have the cache block through inheritance, so a cache block need not be set for each target sub-view. This reduces the time for constructing target sub-views, further reducing the time for synthesizing target interaction elements into the target video and further improving the user experience. Meanwhile, a target sub-view stores the result of an executed action into the cache block it includes, so the target video can be rendered quickly from that result, which speeds up the synthesis of target interaction elements into the target video, reduces the synthesis time, and further improves the user experience.
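A sketch (structure assumed) of a sub-view storing the result of an executed action into the cache block inherited from the target root view, from which the target video is then rendered by the composition step shown earlier:

```java
// Hypothetical sticker-like sub-view, reusing the SubView and Rect sketches.
class StickerSubView extends SubView {
    Rect rect = new Rect();   // where the element sits
    int argb = 0x80FFFFFF;    // illustrative semi-transparent pixel value
    int cacheWidth;           // row stride of the shared cache block

    // Store the executed action's result into the inherited cache block;
    // no file is written, so no disk read-write is involved.
    void drawIntoCache() {
        for (int y = (int) rect.top; y < (int) rect.bottom; y++)
            for (int x = (int) rect.left; x < (int) rect.right; x++)
                cacheBlock[y * cacheWidth + x] = argb;
    }

    @Override Rect bounds() { return rect; }
}
```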
In addition, in the related art each newly added target interaction element is of a different type from the others, and as target interaction elements keep being added to the target video over time, their maintainability is poor and the maintenance cost grows higher and higher. In the technical solution provided by the embodiments of the present application, a target sub-view can inherit the attributes of the target root view, and the target sub-view and the target root view belong to the same kind of view, so the combination of the target root view and the target sub-views can be treated as a single kind of view, which improves the maintainability of target interaction elements and reduces the maintenance cost.
As a possible implementation, when the attributes of the target root view include a boundary, the target sub-views beyond the boundary of the target root view may be clipped, in order to guarantee the effect of synthesizing the target interaction elements into the target video and to improve the user experience.
Specifically, the position attribute information of all target sub-views in the target root view is traversed. If the position of a target sub-view exceeds the boundary of the target root view, the part of the target sub-view beyond the boundary is deleted and the part within the boundary is kept; if the position of a target sub-view lies within the boundary of the target root view, the target sub-view is retained. Finally, the target root view and the target sub-views within its boundary are synthesized into the target video.
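A sketch of this boundary check, reusing the earlier RootView and Rect types (method names assumed): each sub-view's position is intersected with the root-view boundary; an empty intersection means the sub-view lies fully outside and is removed, while a partial intersection yields the clipped region that is kept:

```java
class BoundaryClipper {
    // Intersection of the sub-view rectangle with the root-view boundary:
    // the part of the sub-view kept after clipping.
    static Rect clip(Rect view, Rect boundary) {
        Rect r = new Rect();
        r.left = Math.max(view.left, boundary.left);
        r.top = Math.max(view.top, boundary.top);
        r.right = Math.min(view.right, boundary.right);
        r.bottom = Math.min(view.bottom, boundary.bottom);
        return r;   // empty (right <= left or bottom <= top) means fully outside
    }

    static void apply(RootView root) {
        root.subViews().removeIf(v -> {
            Rect kept = clip(v.bounds(), root.boundary);
            // Fully outside the boundary: remove the sub-view entirely;
            // partly outside: only `kept` is rendered into the cache block.
            return kept.right <= kept.left || kept.bottom <= kept.top;
        });
    }
}
```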
In addition to the interactive element synthesis method provided in the embodiments of the present application, an interactive element synthesis apparatus is also provided, as shown in fig. 4, including: an acquisition unit 401, a construction unit 402, and a synthesis unit 403;
the obtaining unit 401 is configured to obtain a target interaction element required by target video synthesis;
the constructing unit 402 is configured to construct, in a memory and according to the target interaction element, the target root view and the target sub-view required by the target interaction element; the root view is an abstract instance in the memory used to identify an interactive element, and the sub-view is an abstract instance in the memory used to implement a function required by an interactive element;
the synthesizing unit 403 is configured to synthesize the target root view and the target sub-view into the target video, and generate a synthesized video with the target interaction element.
As a possible implementation, the apparatus is further configured to:
acquiring interactive elements required by video synthesis;
constructing the view required by an interactive element according to its function, and storing the views in the memory in a tree structure; wherein the views comprise one root view and a plurality of child views.
As a possible implementation manner, the building unit 402 is configured to:
constructing the target root view in the memory according to the target interaction elements;
adding, to the target root view, the target child view that is required by the target interaction element and inherits the attributes of the target root view.
As a possible implementation manner, the attributes of the target root view include cache blocks, and the synthesizing unit 403 is configured to:
the target sub-view stores the result of an executed action into the cache block included in the target sub-view;
and rendering the target video according to the result.
As a possible implementation manner, the attribute of the target root view includes a boundary, and the synthesizing unit 403 is configured to:
traversing the position attribute information of the target sub-view;
if the position of the target sub-view exceeds the boundary of the target root view, deleting the part of the target sub-view that exceeds the boundary of the target root view; if the position of the target sub-view lies within the boundary of the target root view, retaining the target sub-view;
and synthesizing the target root view and the target sub-view in the boundary of the target root view into the target video.
As a possible implementation, the properties of the root view include at least one of:
width, height, position, transparency, boundary, cache block, add child view, and remove child views beyond the root view boundary.
As a possible implementation, the sub-view includes at least one of:
a graffiti sub-view and a touch sub-view;
wherein the properties of the graffiti sub-view include at least one of:
a brush list, a path list, a mode switch and a doodle event;
the properties of the touch sub-view include at least one of:
scaling size, aspect ratio, center point coordinates, obtaining rotation angle, obtaining touch distance, and touch event processing.
The embodiments of the present application provide an interactive element synthesis apparatus. By acquiring the target interaction elements required for target video synthesis, the target root view and the target sub-views required by the target interaction elements can be constructed in the memory. Because the target root view and the target sub-views are abstract instances in the memory, synthesizing them into the target video requires no large number of read-write operations to fetch the pictures corresponding to interactive elements from a disk, which reduces the file read-write time, shortens the waiting time of the video-making user, and improves the user experience.
An embodiment of the present application further provides a computer device. Referring to fig. 5, which shows a structural diagram of a computer device provided by an embodiment of the present application; as shown in fig. 5, the device includes a processor 510 and a memory 520:
the memory 520 is used for storing program code and transmitting the program code to the processor;
the processor 510 is configured to execute any one of the interactive element synthesis methods provided in the above embodiments according to the instructions in the program code.
The embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium is used for storing a computer program, and the computer program is used for executing any one of the interactive element synthesis methods provided in the above embodiments.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, it is relatively simple to describe, and reference may be made to some descriptions of the method embodiment for relevant points. The above-described apparatus embodiments are merely illustrative, and the units and modules described as separate components may or may not be physically separate. In addition, some or all of the units and modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing is directed to embodiments of the present application and it is noted that numerous modifications and adaptations may be made by those skilled in the art without departing from the principles of the present application and are intended to be within the scope of the present application.

Claims (10)

1. An interactive element synthesis method, comprising:
acquiring target interaction elements required by target video synthesis;
according to the target interaction element, constructing, in a memory, the target root view and the target sub-view required by the target interaction element; the root view is an abstract instance in the memory used to identify an interactive element, and the sub-view is an abstract instance in the memory used to implement a function required by an interactive element;
and synthesizing the target root view and the target sub view into the target video to generate a synthesized video with the target interaction elements.
2. The method of claim 1, further comprising:
acquiring interactive elements required by video synthesis;
constructing the view required by an interactive element according to its function, and storing the views in the memory in a tree structure; wherein the views comprise one root view and a plurality of child views.
3. The method according to claim 1, wherein the constructing a target root view and a target sub-view required by the target interactive element in the memory according to the target interactive element comprises:
constructing the target root view in the memory according to the target interaction elements;
adding, to the target root view, the target child view that is required by the target interaction element and inherits the attributes of the target root view.
4. The method of claim 3, wherein the attributes of the target root view comprise cache blocks, and wherein the compositing the target root view and the target sub-view into the target video comprises:
the target sub-view stores the result of an executed action into the cache block included in the target sub-view;
and rendering the target video according to the result.
5. The method of claim 1, wherein the properties of the target root view include boundaries, and wherein compositing the target root view and the target sub-view into the target video comprises:
traversing the position attribute information of the target sub-view;
if the position of the target sub-view exceeds the boundary of the target root view, deleting the part of the target sub-view that exceeds the boundary of the target root view; if the position of the target sub-view lies within the boundary of the target root view, retaining the target sub-view;
and synthesizing the target root view and the target sub-view in the boundary of the target root view into the target video.
6. The method of claim 1, wherein the properties of the root view comprise at least one of:
width, height, position, transparency, boundary, cache block, add child view, and remove child views beyond the root view boundary.
7. The method of claim 1, wherein the sub-view comprises at least one of:
a graffiti sub-view and a touch sub-view;
wherein the properties of the graffiti sub-view include at least one of:
a brush list, a path list, a mode switch and a doodle event;
the properties of the touch sub-view include at least one of:
scaling size, aspect ratio, center point coordinates, obtaining rotation angle, obtaining touch distance, and touch event processing.
8. An interactive element composition apparatus, comprising: an acquisition unit, a construction unit and a synthesis unit;
the acquisition unit is used for acquiring target interaction elements required by target video synthesis;
the construction unit is used for constructing, in a memory and according to the target interaction element, the target root view and the target sub-view required by the target interaction element; the root view is an abstract instance in the memory used to identify an interactive element, and the sub-view is an abstract instance in the memory used to implement a function required by an interactive element;
the synthesizing unit is used for synthesizing the target root view and the target sub view into the target video to generate a synthesized video with the target interaction elements.
9. A computer device, the device comprising a processor and a memory:
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the method of any of claims 1-7 according to instructions in the program code.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program for performing the method of any one of claims 1-7.
CN202011494849.0A 2020-12-17 2020-12-17 Interactive element synthesis method and related device Active CN112463017B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011494849.0A CN112463017B (en) 2020-12-17 2020-12-17 Interactive element synthesis method and related device


Publications (2)

Publication Number Publication Date
CN112463017A 2021-03-09
CN112463017B 2021-12-14

Family

ID=74803558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011494849.0A Active CN112463017B (en) 2020-12-17 2020-12-17 Interactive element synthesis method and related device

Country Status (1)

Country Link
CN (1) CN112463017B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102685359A (en) * 2012-05-14 2012-09-19 大道计算机技术(上海)有限公司 Method for realizing hardware-acceleration-based video picture on remote desktop
CN104221385A (en) * 2012-04-16 2014-12-17 高通股份有限公司 View synthesis based on asymmetric texture and depth resolutions
CN105551070A (en) * 2015-12-09 2016-05-04 广州市久邦数码科技有限公司 Camera system capable of loading map elements in real time
CN107491298A (en) * 2017-07-07 2017-12-19 武汉斗鱼网络科技有限公司 A kind of button object automatic scanning method and system
CN107610209A (en) * 2017-08-17 2018-01-19 上海交通大学 Human face countenance synthesis method, device, storage medium and computer equipment
CN109462769A (en) * 2018-10-30 2019-03-12 武汉斗鱼网络科技有限公司 Direct broadcasting room pendant display methods, device, terminal and computer-readable medium
EP3493148A1 (en) * 2017-11-30 2019-06-05 Thomson Licensing View synthesis for unstabilized multi-view video
CN110012352A (en) * 2019-04-17 2019-07-12 广州华多网络科技有限公司 Image special effect processing method, device and net cast terminal
CN110582018A (en) * 2019-09-16 2019-12-17 腾讯科技(深圳)有限公司 Video file processing method, related device and equipment
CN110719525A (en) * 2019-08-28 2020-01-21 咪咕文化科技有限公司 Bullet screen expression package generation method, electronic equipment and readable storage medium
CN111031400A (en) * 2019-11-25 2020-04-17 上海哔哩哔哩科技有限公司 Barrage presenting method and system
CN111617473A (en) * 2020-05-28 2020-09-04 腾讯科技(深圳)有限公司 Display method and device of virtual attack prop, storage medium and electronic equipment
CN111726676A (en) * 2020-07-03 2020-09-29 腾讯科技(深圳)有限公司 Image generation method, display method, device and equipment based on video


Also Published As

Publication number Publication date
CN112463017B 2021-12-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221109

Address after: 100005 No. 69, Jianguomennei Street, Dongcheng District, Beijing

Patentee after: AGRICULTURAL BANK OF CHINA
Patentee after: Agricultural Bank of China Financial Technology Co., Ltd.

Address before: 100005 No. 69, Jianguomennei Street, Dongcheng District, Beijing

Patentee before: AGRICULTURAL BANK OF CHINA