CN112929732B - Video processing method and device and computer storage medium

Info

Publication number
CN112929732B
CN112929732B (application CN201911244951.2A)
Authority
CN
China
Prior art keywords
layer
attribute
information
video
character
Prior art date
Legal status
Active
Application number
CN201911244951.2A
Other languages
Chinese (zh)
Other versions
CN112929732A
Inventor
陈仁健
陈新星
刘志
田卓
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911244951.2A
Publication of CN112929732A
Application granted
Publication of CN112929732B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N 21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Abstract

The application provides a video processing method and apparatus and a computer storage medium. After a video to be analyzed is obtained, the layer names and layer annotations of its layers are parsed to obtain attribute information recorded in advance in the layer name and/or the layer annotation, where each piece of attribute information records a layer attribute and a corresponding attribute value. For each piece of attribute information, the attribute value of the target attribute of its associated layer is adjusted so that it matches the attribute value recorded in the attribute information; the target attribute of an associated layer is the layer attribute recorded in the corresponding attribute information. The processed video is thereby obtained. With this scheme, the attribute values corresponding to a newly added visual effect can be parsed directly from the layer names and layer annotations of the video's layers, and the layers of the video are adjusted accordingly. The scheme can automatically process a variety of visual effects without requiring the user to define a data structure and a parsing method for each one, thereby improving the user experience.

Description

Video processing method and device and computer storage medium
Technical Field
The present application relates to the field of video technologies, and in particular, to a method and an apparatus for processing a video, and a computer storage medium.
Background
Video editing software is software used to edit videos, that is, to modify the visual effects of a video. A user can install an effect plug-in corresponding to a visual effect in the video editing software and use the plug-in to add that visual effect to a video.
An effect plug-in generally configures, for the layer to be edited in the video, a set of attribute values corresponding to the visual effect, and when the video is exported this set of attribute values is stored as a data structure together with the video itself. When the video is to be played, a video parsing tool parses the data structure, sets the layer attributes of the layers in the video to the attribute values recorded in the data structure, and outputs the adjusted video, completing the display of the visual effect.
However, different visual effects correspond to different attribute values, and the data structures recording those attribute values also differ. For the video parsing tool to be able to handle different visual effects, a data structure and a parsing method corresponding to each newly added visual effect must be predefined in the tool, which results in a poor user experience when playing a video with a newly added visual effect.
Disclosure of Invention
In view of the above problems of the prior art, the present invention provides a video processing method, apparatus and computer storage medium to provide a video processing scheme capable of automatically displaying multiple visual effects.
A first aspect of the present application provides a video processing method, including:
acquiring a video to be analyzed; the video to be analyzed comprises a plurality of image frames, and each image frame comprises at least one image layer; the video to be analyzed comprises at least one target layer, wherein layer information of the target layer is recorded with attribute information, and the layer information refers to any one or combination of a layer name and a layer annotation;
analyzing the layer information of the target layer to obtain attribute information in the layer information; each piece of attribute information records a layer attribute and a corresponding attribute value, and is used for adjusting the attribute value of the layer associated with the attribute information;
for each piece of attribute information, adjusting the attribute value of the layer attribute of the layer associated with the attribute information, so that the adjusted attribute value of the layer attribute is matched with the attribute value recorded by the attribute information; and combining the adjusted layer and the unadjusted layer of the video to be analyzed into the video to be played.
Optionally, analyzing the layer information of the target layer to obtain attribute information in the layer information, including:
identifying characters in character strings to be analyzed one by one, and determining continuous characters between each information starting character and corresponding information ending character in the character strings to be analyzed as effective character strings; wherein the character string to be analyzed refers to the layer name or the layer annotation; in the character string to be analyzed, the first information end character behind each information start character is used as an information end character corresponding to the information start character;
determining effective character strings for representing the layer attributes as attribute character strings;
and determining an attribute value associated with each attribute character string, and determining the layer attribute represented by the attribute character string and the associated attribute value as attribute information.
Optionally, the determining, for each attribute character string, an attribute value associated with the attribute character string includes:
for each attribute character string, determining N numerical values behind the attribute character string as attribute values associated with the attribute character string; wherein, the N refers to the number of attribute values corresponding to the layer attribute represented by the attribute character string.
Optionally, after the adjusting, for each piece of attribute information, an attribute value of a target attribute of an associated layer of the attribute information, the method further includes:
and sequentially outputting the image frames in the video to be played, thereby playing the video to be played.
A second aspect of the present application provides a video processing method, including:
in response to the layer selection operation, determining at least one layer in the video to be edited as a target layer; the video to be edited comprises a plurality of image frames, and each image frame comprises at least one image layer;
outputting a layer information editing interface of each target layer; the layer information editing interface refers to any one or combination of a layer name editing interface and a layer annotation editing interface;
for each target layer, determining attribute information input by a user on a layer information editing interface of the target layer as layer information corresponding to the layer information editing interface; each piece of attribute information comprises a layer attribute and a corresponding attribute value;
and exporting the video to be edited and the layer information of the target layer together into a video to be analyzed.
A third aspect of the present application provides a video processing apparatus, including:
the acquisition unit is used for acquiring a video to be analyzed; the video to be analyzed comprises a plurality of image frames, and each image frame comprises at least one image layer; the video to be analyzed comprises at least one target layer, wherein layer information of the target layer is recorded with attribute information, and the layer information refers to any one or combination of a layer name and a layer annotation;
the analysis unit is used for analyzing the layer information of the target layer to obtain attribute information in the layer information; each piece of attribute information records a layer attribute and a corresponding attribute value, and is used for adjusting the attribute value of the layer associated with the attribute information;
an adjusting unit, configured to adjust, for each piece of attribute information, an attribute value of a layer attribute of a layer associated with the attribute information, so that the adjusted attribute value of the layer attribute matches an attribute value recorded by the attribute information; and combining the adjusted layer and the unadjusted layer of the video to be analyzed into the video to be played.
Optionally, the analyzing unit is configured to, when analyzing the layer information of the target layer to obtain the attribute information in the layer information, specifically:
identifying characters in character strings to be analyzed one by one, and determining continuous characters between each information starting character and corresponding information ending character in the character strings to be analyzed as effective character strings; wherein the character string to be analyzed refers to the layer name or the layer annotation; in the character string to be analyzed, the first information end character behind each information start character is used as an information end character corresponding to the information start character;
determining effective character strings for representing the layer attributes as attribute character strings;
and determining an attribute value associated with each attribute character string, and determining the layer attribute represented by the attribute character string and the associated attribute value as attribute information.
Optionally, when the parsing unit determines, for each attribute character string, an attribute value associated with the attribute character string, the parsing unit is specifically configured to:
for each attribute character string, determining N numerical values behind the attribute character string as attribute values associated with the attribute character string; wherein, the N refers to the number of attribute values corresponding to the layer attribute represented by the attribute character string.
A fourth aspect of the present application provides a video processing apparatus, including:
the determining unit is used for responding to the layer selection operation and determining at least one layer in the video to be edited as a target layer; the video to be edited comprises a plurality of image frames, and each image frame comprises at least one image layer;
the output unit is used for outputting a layer information editing interface of each target layer; the layer information editing interface refers to any one or combination of a layer name editing interface and a layer annotation editing interface;
the determining unit is configured to determine, for each target layer, attribute information input by a user on a layer information editing interface of the target layer as layer information corresponding to the layer information editing interface; each piece of attribute information comprises a layer attribute and a corresponding attribute value;
and the exporting unit is used for exporting the video to be edited and the layer information of the target layer together into the video to be analyzed.
A fifth aspect of the present application provides a computer storage medium for storing a program which, when executed, implements a method of processing video as provided in any one of the first aspects of the present application.
The application provides a video processing method and apparatus and a computer storage medium. After a video to be analyzed is obtained, the layer names and layer annotations of its layers are parsed to obtain the attribute information recorded in advance in the layer name and/or the layer annotation, where each piece of attribute information records a layer attribute and a corresponding attribute value. For each piece of pre-specified attribute information, the attribute value of the target attribute of its associated layer is adjusted so that it matches the attribute value recorded in the attribute information, the target attribute of an associated layer being the layer attribute recorded in the corresponding attribute information; the processed video is thereby obtained. With this scheme, the attribute values corresponding to a newly added visual effect can be parsed directly from the layer names and layer annotations of the video's layers, and the layers of the video are adjusted accordingly. The scheme can automatically process a variety of visual effects without requiring the user to define data structures and parsing methods, thereby improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Evidently, the drawings described below show only embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a method for analyzing attribute information in a character string according to an embodiment of the present application;
fig. 3 is a flowchart of a video processing method according to another embodiment of the present application;
fig. 4 is a schematic diagram of a layer name editing interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a layer annotation editing interface according to an embodiment of the present application;
fig. 6 is a flowchart of a video processing method according to yet another embodiment of the present application;
fig. 7 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a video processing apparatus according to another embodiment of the present application;
fig. 9 is a schematic structural diagram of a video processing apparatus according to yet another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A first embodiment of the present application provides a video processing method for parsing a video to be analyzed that has been exported by video editing software, so as to obtain a video to be played in which the visual effect added by a designer in the video editing software is displayed.
In the embodiments provided by the present application, the executing entity may be a parsing module within video editing software, or a video parsing tool independent of the video editing software.
Referring to fig. 1, the method provided in this embodiment includes the following steps:
s101, obtaining a video to be analyzed.
The video to be analyzed comprises a plurality of image frames, and each image frame comprises at least one image layer. The video to be analyzed comprises at least one target layer.
The video to be analyzed in this embodiment is a video obtained after a user edits the video with video editing software.
A layer is a concept from video editing software. A layer can be understood as a film on which one or more elements are displayed (elements include, but are not limited to, text, simple geometric figures, and pictures). Each layer also has various layer attributes, and the display effect of a layer can be modified by adjusting the attribute values of its layer attributes.
One or more layers are sequentially overlapped to form an image frame in the video, and a plurality of image frames form a section of video.
Attribute information is recorded in the layer information of the target layer, where layer information refers to either or both of the layer name and the layer annotation. In other words, among the layers of the video to be analyzed, one part has several pieces of attribute information recorded in the layer name and/or the layer annotation, while the other part has no attribute information recorded.
Which layers' layer information records attribute information (that is, which layers are the target layers) is determined by the user at the video editing stage.
A layer name is a character string used to identify a layer and distinguish it from other layers. That is, each layer in a video to be analyzed has a layer name that is unique among the layer names of all other layers.
A layer annotation is also a character string, mainly used to record the user's supplementary description of a layer (in this embodiment, the user is the person who edits the video with the video editing software, and may be understood as the designer). Several layers of a video to be analyzed may share the same layer annotation, and a layer annotation may be empty, that is, a layer may have no annotation.
It should be noted that both the layer name and the layer annotation can be edited by the user as needed when editing the video with the video editing software. Therefore, when editing a video, the user may record a character string expressing attribute information as the layer name or layer annotation of a target layer, or may record part of the attribute information in the layer name and part in the layer annotation.
S102, analyzing the layer information of the target layer to obtain attribute information in the layer information.
Each piece of attribute information records a layer attribute and an attribute value corresponding to the layer attribute.
Each piece of attribute information is associated with at least one layer in advance, and each piece of attribute information is used for adjusting the attribute value of the associated layer.
The layer attributes that may be recorded in the attribute information include, but are not limited to: layer saturation, layer brightness, layer position, layer transparency, layer definition and the like. The layer position refers to a distance between an edge of a layer and an edge of a video playing area when a video is played.
Specifically, the layer position may correspond to two attribute values: the transverse distance and the longitudinal distance. If both are 0, the edge of the layer completely coincides with the edge of the playing area when the video is played. If the transverse distance is 3 mm and the longitudinal distance is -4 mm, the layer is moved 3 mm to the left of the initial position (i.e., the position where the layer edge and the playing-area edge coincide) and 4 mm down during playback; if the transverse distance is -3 mm and the longitudinal distance is 4 mm, the layer is moved 3 mm to the right and 4 mm up.
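As a small worked illustration of this sign convention (a sketch only; the patent prescribes no code, and the coordinate handling below is an assumption), the two recorded distances can be turned into a screen-space offset in which x grows rightward and y grows downward:

    # Illustrative sketch of the position convention described above.
    # Assumption: screen coordinates with x growing rightward and y growing
    # downward; per the paragraph above, a positive transverse distance
    # moves the layer left and a positive longitudinal distance moves it up.
    def rendered_offset_mm(transverse_mm, longitudinal_mm):
        dx = -transverse_mm    # transverse 3 mm    -> 3 mm to the left
        dy = -longitudinal_mm  # longitudinal -4 mm -> 4 mm down
        return dx, dy

    # rendered_offset_mm(3, -4) == (-3, 4): 3 mm left and 4 mm down,
    # matching the first example above.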
The definitions of the saturation, brightness, transparency and definition of the image layer are consistent with those of the saturation, brightness, transparency and definition of the image, and are not repeated herein.
Optionally, a piece of attribute information may be associated with only one layer, namely the layer whose layer name or layer annotation records that attribute information, or it may be associated with multiple layers. Which layers a given piece of attribute information is associated with can be determined in two ways.
On one hand, several layer names may be recorded at the end of the layer name or layer annotation that records the attribute information. In this case, the layer carrying that layer name or annotation, together with the layers corresponding to the recorded layer names, are the layers associated with the attribute information in that layer name or annotation.
For example, if an image frame includes 4 layers, one of which is a target layer, attribute information is recorded in a layer name of the target layer, and a user needs to associate the attribute information with the other 3 layers of the image frame, the layer names of the 4 layers may be set in the following format:
"layer 1", "layer 2", "layer 3", "layer 4{ attribute information }, layer 1, layer 2, layer 3".
It can be seen that the fourth layer is the target layer, and the layer names of the first three (non-target) layers are recorded in its layer name; therefore, when the attribute information of the fourth layer is parsed, the fourth layer and the first three layers can further be determined as the layers associated with that attribute information.
On the other hand, the user may preset, in the parsing tool that executes the method of this embodiment, which layers each piece of attribute information is associated with, and the layers associated with each piece of attribute information can then be determined by reading the information set by the user.
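For illustration, the first way of determining associations (reading the layer names recorded after the braces, as in the "layer 4" example above) can be sketched as follows; the brace and comma conventions are assumptions drawn solely from that example, not a format fixed by the patent:

    # A sketch of reading the associated layer names from a target layer's
    # name in the illustrative format
    # "layer 4{ attribute information }, layer 1, layer 2, layer 3".
    # Hypothetical helper, not a function defined by the patent.
    def associated_layers(layer_name):
        own_name, _, rest = layer_name.partition("{")
        info, _, tail = rest.partition("}")
        others = [name.strip() for name in tail.split(",") if name.strip()]
        # The target layer itself plus every layer listed after the braces.
        return [own_name.strip()] + others, info.strip()

    # associated_layers("layer 4{ attribute information }, layer 1, layer 2, layer 3")
    # -> (["layer 4", "layer 1", "layer 2", "layer 3"], "attribute information")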
S103, adjusting the attribute value of the layer attribute of the layer associated with each piece of attribute information.
That is, the adjusted attribute value of the layer attribute is made to match the attribute value recorded in the attribute information.
Specifically, when step S103 is executed, for any piece of attribute information, the layer associated with the attribute information is determined first, and then, for each layer associated with the attribute information, the attribute value of the target layer attribute of the layer is modified into the attribute value included in the attribute information, where the target layer attribute refers to the layer attribute included in the attribute information.
For example, suppose the layer attribute contained in a piece of attribute information A is layer brightness and the corresponding attribute value (i.e., the brightness value) is 70. For a layer associated with attribute information A, layer brightness is the target layer attribute, and adjusting that layer with attribute information A simply means changing its layer brightness from the current value to the brightness value 70 contained in A.
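A minimal sketch of this adjustment step (step S103) follows; the dictionary-based layer representation and the function name are assumptions for illustration, since the patent does not prescribe data structures:

    # Sketch of step S103: overwrite the target attribute of every layer
    # associated with a piece of attribute information. "layers" maps a
    # layer name to a dict of its layer attributes (an assumed structure).
    def apply_attribute_infos(attribute_infos, layers):
        for layer_attribute, attribute_value, associated_names in attribute_infos:
            for name in associated_names:
                # Make the attribute value match the one recorded in the
                # attribute information (e.g. brightness 70).
                layers[name][layer_attribute] = attribute_value

    # Example with attribute information A from the paragraph above:
    layers = {"layer 4": {"brightness": 45}}
    apply_attribute_infos([("brightness", 70, ["layer 4"])], layers)
    # layers["layer 4"]["brightness"] is now 70.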
And combining the adjusted layer and the unadjusted layer of the video to be analyzed into the video to be played. It should be noted that the combination here means that the adjusted layer and the layer that is not adjusted are combined into a plurality of image frames according to the original combination mode of the video to be analyzed, and the image frames further form the video to be played.
That is to say, for each image frame of the video to be parsed, if a part of layers of the image frame is adjusted and another part of layers is not adjusted, the adjusted part of layers and the part of layers that are not adjusted may be superimposed as an image frame in the video to be played in an original combination manner.
If all layers of the image frame are not associated with attribute information, the image frame is directly used as an image frame of the video to be played.
If all layers of the image frame are adjusted in layer attributes, the adjusted layers obtained after all layers in the image frame are adjusted can be superimposed as an image frame in the video to be played in the original combination mode.
After the video to be played is obtained, playback is completed simply by outputting its image frames one by one in sequence. Optionally, this output may be performed by the video parsing tool that performs the adjustment, or the video to be played may be exported after parsing and then output by another player.
The adjustment in step S103 may be performed in two ways:
in a first aspect, all layers that need to be adjusted (i.e., layers associated with one or more pieces of attribute information) included in a video to be analyzed may be adjusted, and then the adjusted layers and the layers that are not adjusted are combined to obtain a complete video to be played, and then image frames are output.
In the second aspect, the image frames of the video to be analyzed may instead be parsed one by one: if an image frame contains layers that need to be adjusted, step S103 is executed to adjust those layers with the attribute information, the adjusted and unadjusted layers are combined into one image frame of the video to be played and that frame is output, and then the next image frame is parsed, and so on, as sketched below.
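The frame-by-frame strategy can be sketched as follows (all structures here are assumptions for illustration; the patent does not fix how frames or layers are represented):

    # Sketch of the second strategy: parse image frames one by one, adjust
    # the layers associated with attribute information, then recombine the
    # adjusted and unadjusted layers in their original stacking order and
    # output the frame before moving on to the next one.
    def process_frames(frames, adjustments):
        """frames: list of frames, each a list of layer dicts ordered
        bottom to top; adjustments: maps a layer name to the
        {layer_attribute: attribute_value} pairs to apply (step S103)."""
        for frame in frames:
            for layer in frame:
                for attr, value in adjustments.get(layer["name"], {}).items():
                    layer[attr] = value
            yield frame  # the original combination of the layers is kept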
One of a user's main requirements when editing a video is to modify, according to the user's needs, the visual effects displayed when the video is played, and the visual effects displayed in a video are generally determined by the attribute values of the layer attributes of the layers of each image frame. In other words, as long as the attribute values of the layer attributes of the relevant layers in the output video meet the requirements, the corresponding visual effect can be presented.
This is what the video processing method provided by this embodiment exploits. When a user wants to present a certain visual effect, the user can determine which layer attributes of which layers in the video that effect requires adjusting, what the adjusted attribute values should be, and so on. At the video editing stage, the user records the layer attributes to be adjusted and the adjusted attribute values in the layer information of the video in the form of attribute information, and determines the layers to be adjusted by configuring the layers associated with each piece of attribute information.
After the user completes this configuration, the video parsing software processes the video to be analyzed by executing the method provided by this embodiment: it parses the attribute information set by the user, then adjusts the layer attributes of the associated layers with that attribute information, and when the adjusted and unadjusted layers are combined and the image frames are output, the visual effect required by the user is displayed.
This embodiment parses the attribute information directly from the layer information and adjusts the layer attributes with it, thereby presenting the specific visual effect. Compared with existing video editing and parsing schemes based on effect plug-ins, the processing method of this embodiment automatically identifies and uses the attribute information corresponding to any visual effect, without requiring the user to predefine different data structures and parsing methods in the video parsing tool for different effect plug-ins. The method can therefore conveniently process videos to be analyzed that carry a variety of visual effects, effectively improving the user experience.
The first embodiment of the present application mentions that the video processing method provided by the present application can parse attribute information from the layer name and the layer annotation. A specific parsing method is described below with reference to fig. 2; it should be understood, however, that the method provided by the present application can work with a variety of parsing methods, including but not limited to the following one.
S201, identifying characters in the character string to be analyzed one by one, and determining continuous characters between each information starting character and the corresponding information ending character in the character string to be analyzed as effective character strings.
Wherein, the character string to be analyzed refers to a layer name or a layer annotation.
The character string to be analyzed comprises a plurality of information starting characters and a plurality of information ending characters, and for each information starting character, the first information ending character behind the information starting character is marked as the information ending character corresponding to the information starting character.
Specifically, which characters are treated as the information start character and the information end character depends on the information format used when the user edited the layer names and layer annotations. The user may preset, in the video parsing tool, the information format of the attribute information in the current video to be analyzed; the tool can then determine which characters serve as the information start character and the information end character for that format, and execute step S201.
For example, in one optional information format, the opening quotation mark may be taken as the information start character and the closing quotation mark as the information end character.
S202, determining the effective character string used for representing the layer attribute as an attribute character string.
An attribute character string is an effective character string that represents a layer attribute.
Different layer attributes can be represented by different character strings; for example, the layer position may be represented as "location" and the layer brightness as "light". For each effective character string, it suffices to compare it with the character string corresponding to each pre-recorded layer attribute to determine whether it is an attribute character string representing a layer attribute, and if it is, which layer attribute it represents.
S203, determining the attribute value associated with the attribute character string for each attribute character string.
For any attribute character string, once the attribute value associated with it has been determined, the layer attribute represented by the attribute character string and the associated attribute value form one piece of attribute information.
Optionally, when editing the layer name, the user may directly input several numeric values after an attribute character string as the attribute values of the layer attribute it represents; the number of values input equals the number of attribute values that the layer attribute requires. For example, referring to step S102 in the first embodiment of the present application, the layer-position attribute requires two attribute values (the transverse distance and the longitudinal distance), while layer brightness requires only a single attribute value representing the brightness.
For this input method, to determine the attribute values associated with an attribute character string, the number of attribute values required by the layer attribute it represents (denoted as N) may be determined first, and the N numeric values following the attribute character string may then be taken directly as the attribute values associated with it.
Optionally, the user may instead record the attribute values in the layer information as effective character strings. In this case, the number N of attribute values required by the layer attribute represented by the attribute character string may be determined first, the N effective character strings following the attribute character string are determined as character strings representing attribute values, and those character strings are converted into numeric values, yielding the attribute values associated with the attribute character string.
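Putting steps S201 to S203 together, a minimal parsing sketch might look like the following. The choice of straight double quotation marks as both the information start and end characters, the attribute table, and all names are illustrative assumptions; only the three-step flow itself comes from this embodiment:

    import re

    # Assumed table: attribute character string -> number N of attribute
    # values the represented layer attribute requires (see step S203).
    ATTRIBUTE_VALUE_COUNTS = {
        "location": 2,  # transverse distance, longitudinal distance
        "light": 1,     # a single brightness value
    }

    NUMBER = re.compile(r'-?\d+(?:\.\d+)?')

    def parse_layer_info(text):
        # Parses pieces of attribute information out of a layer name or
        # layer annotation; assumes well-formed input.
        attribute_infos = []
        i = 0
        while i < len(text):
            if text[i] != '"':           # scan for an information start character
                i += 1
                continue
            end = text.find('"', i + 1)  # first end character after it
            if end == -1:
                break
            effective = text[i + 1:end]  # S201: one effective character string
            i = end + 1
            if effective in ATTRIBUTE_VALUE_COUNTS:      # S202: attribute string?
                n = ATTRIBUTE_VALUE_COUNTS[effective]
                # S203: the N numeric values after the attribute string
                # are its associated attribute values.
                values = [float(v) for v in NUMBER.findall(text[i:])[:n]]
                attribute_infos.append((effective, values))
        return attribute_infos

    # parse_layer_info('"location" 3 -4 "light" 70')
    # -> [('location', [3.0, -4.0]), ('light', [70.0])]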
A third embodiment of the present application provides a further video processing method. It describes how a user edits the layer information of a video to be edited so as to obtain a video to be analyzed containing both the video and attribute information, allowing the user to record the attribute information corresponding to a desired visual effect in the layer information. Executing the method of this embodiment corresponds to the video editing stage that produces the video to be analyzed of the first embodiment of the present application.
The method provided by the present embodiment may be performed by a video editing tool.
Referring to fig. 3, the present embodiment includes the following steps:
s301, responding to the layer selection operation, and determining at least one layer in the video to be edited as a target layer.
Specifically, if the user needs to modify the image saturation of a certain time period of one video and modify the brightness of another time period, the video is the video to be edited.
The target layer refers to a layer specified by a user and requiring to record a plurality of pieces of attribute information in the layer information. In other words, after the user selects a target layer in the video to be edited and writes the attribute information in the layer information, the target layer selected by the user here is equivalent to the target layer mentioned in the first embodiment of the present application.
According to the number of layers required to be adjusted by a user, the user can select one or more layers in the video to be edited as target layers.
S302, outputting a layer information editing interface of each target layer.
The layer information editing interface refers to any one or combination of a layer name editing interface and a layer annotation editing interface.
That is, according to the selection of the user, in step S302, the layer name editing interface may be output, or the layer comment editing interface may be output.
The layer name editing interface and the layer annotation editing interface can be in various forms, fig. 4 is a schematic diagram of an optional layer name editing interface, and fig. 5 is a schematic diagram of an optional layer annotation editing interface.
S303, determining attribute information input by a user on a layer information editing interface of each target layer as layer information corresponding to the layer information editing interface.
Each piece of attribute information comprises a layer attribute and a corresponding attribute value.
The layer information editing interface displays an input box for entering the corresponding layer information; in this input box, the user can enter, in text form, the character strings corresponding to the layer attributes to be adjusted and the attribute values to be set.
Referring to the accompanying drawings, in the layer name editing interface shown in fig. 4, the layer name input box is the black rectangular box; after the user enters attribute information in the layer name input box in text form and confirms it, the program saves the information entered in the layer name input box as the layer name of the corresponding layer.
Similarly, in the layer annotation editing interface shown in fig. 5, the layer annotation input box is the black rectangular box; after the user enters attribute information in the layer annotation input box in text form and confirms it, the program saves the information entered in the layer annotation input box as the layer annotation of the corresponding layer.
It should be noted that the format of the attribute information input in the layer information editing interface matches the parsing method provided in the second embodiment of the present application.
Specifically, if the information start character and information end character set for marking effective character strings are the opening and closing quotation marks respectively, then when attribute information is input here, each character string representing a layer attribute is placed within a pair of double quotation marks, and the attribute values to be set for that layer attribute are recorded after the corresponding character string.
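For example, under this convention a user who wants to reposition the associated layer and set its brightness might type the following into the input box (a hypothetical entry; the attribute names "location" and "light" follow the examples of the second embodiment):

    "location" 3 -4 "light" 70

Parsed as described in the second embodiment, this yields two pieces of attribute information: a layer position of (3, -4) and a layer brightness of 70.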
And S304, exporting the video to be edited and the layer information of the target layer together into the video to be analyzed.
The specific method of exporting the video to be edited together with the layer information may follow the related prior art and is not detailed here.
The exported video to be analyzed can be processed with the video processing method provided by the first embodiment of the present application to adjust the layer attributes of the relevant layers, finally obtaining the video to be played, which, when played, presents the visual effect required by the user.
Based on the method provided by this embodiment, when a user needs to adjust the visual effects of a video to be edited, the user only needs to edit the layer information of certain layers of that video, without installing a corresponding effect plug-in, and to export the video together with the layer information as a video to be analyzed; the specific visual effect is then displayed once the video to be analyzed is parsed and played as in the first embodiment of the present application.
It should be noted that the video parsing tool executing the first embodiment of the present application and the video editing tool executing the third embodiment may be different modules of the same video processing software. Correspondingly, a fourth optional embodiment of the present application combines the parsing method of the first embodiment with the editing method of the third embodiment into a single video processing method, which edits the layer information of a video, then adjusts the layer attributes with the attribute information in that layer information, and finally outputs the adjusted video, completing the display of the visual effect.
Referring to fig. 6, a fourth embodiment of the present application includes the steps of:
s601, responding to the layer selection operation, and determining at least one layer in the video to be edited as a target layer.
And S602, outputting a layer information editing interface of the target layer aiming at each target layer.
S603, for each target layer, determining attribute information input by a user on a layer information editing interface of the target layer as layer information corresponding to the layer information editing interface.
And S604, exporting the video to be edited and the layer information of the target layer together into a video to be analyzed.
And S605, acquiring the video to be analyzed.
S606, analyzing the layer information of the target layer to obtain attribute information in the layer information.
S607, for each piece of attribute information, adjusting the attribute value of the layer attribute of the layer associated with the attribute information.
In this embodiment, the specific implementation process of steps S601 to S604 may refer to corresponding steps in the third embodiment of the present application, and the specific implementation process of steps S605 to S607 may refer to corresponding steps in the first embodiment of the present application and the second embodiment of the present application, which are not described herein again.
Specifically, the video processing method provided by the embodiment of the application can be used for editing the sticker animation file, exporting the edited sticker animation file, and adjusting and playing the exported sticker animation file.
In a first aspect, a user may edit the layers of a video to be edited using the editing method of the third embodiment of the present application, where the video to be edited may be a sticker animation file. The user inputs the attribute information to be adjusted in the layer information editing interface. After all attribute information input by the user has been recorded in the layer information of the video to be edited (either the layer names or the layer annotations), the method of the third embodiment can export an edited sticker animation file (i.e., the video to be analyzed mentioned in the embodiments of the present application), which contains all layers of the video to be edited together with the user-edited layer information recording the pieces of attribute information input by the user.
A sticker animation file can be understood as a file composed of several layers and their layer information. Such a file may have any of several formats; for example, it may be a file in PAG format or in Lottie format.
Specifically, when the video editing software obtains the video to be edited, the user may directly input a sticker animation file to the video editing software as the video to be edited, or input a video of any format as an initial video and then add several layers to it with the video editing software; the added layers and the input initial video together form the video to be edited.
In a second aspect, when the video processing method of the first embodiment of the present application is executed, the parsing tool reads the edited sticker animation file (i.e., the video to be analyzed) and then, according to the attribute information recorded in the layer information, adjusts the layer attributes of the layers associated with that attribute information as described in the first embodiment, for example adjusting layer brightness, layer saturation, and so on. After the adjustment is complete, the adjusted layers and the other, unadjusted layers are combined into image frames, and the image frames are output in sequence, so that the video is played with the visual effect required by the user.
The PAG (Portable Animation Graphics) file format is a custom sticker animation file format provided by the embodiments of the present application. Compared with existing sticker animation file formats, the PAG format has the following characteristics:
in a first aspect, as described above, the sticker animation file includes a plurality of layers and layer information. The PAG format can store the sticker animation file in the form of a video frame sequence, and meanwhile, the layer of each frame is stored in the form of vector graphics, so that the PAG format can support more characteristics of video editing software (such as special effect creation software Adobe After Effects, referred to as AE for short) such as text layers, mask extension, reverse Alpha mask and the like compared with the existing sticker animation file format, and can support the editing of texts or pictures in the sticker animation file.
In a second aspect, existing sticker animation file formats (taking the Lottie format as an example) generally store the layer information of the exported sticker animation file directly in text form, whereas the PAG file format generally first converts the layer information into a binary format before storing it.
In a third aspect, when a PAG file is exported, the layers and layer information are typically compressed with several compression algorithms, so the data size of an exported PAG file is smaller than that of existing sticker animation files.
In a fourth aspect, the PAG file format is built on the Skia graphics library (a third-party library of 2D vector graphics processing functions), which provides interfaces (APIs) for a variety of operating systems (including but not limited to Android, iOS, and Windows), so a PAG sticker animation file can achieve consistent video display across operating-system platforms.
Further, the embodiment of the present application further provides a preview tool corresponding to the PAG file format, and the tool can be used to perform overall preview and frame-by-frame preview on a file in the PAG format, so as to better evaluate an exported animation file.
In combination with the methods described in the foregoing embodiments of the present application, the following three embodiments of the present application further provide three video processing apparatuses, respectively.
Referring to fig. 7, a first video processing apparatus according to an embodiment of the present application includes:
an obtaining unit 701 is configured to obtain a video to be analyzed.
The video to be analyzed comprises a plurality of image frames, and each image frame comprises at least one image layer; the video to be analyzed comprises at least one target layer, wherein attribute information is recorded in layer information of the target layer, and the layer information refers to any one or combination of a layer name and a layer annotation.
An analyzing unit 702, configured to analyze the layer information of the target layer to obtain attribute information in the layer information.
Each piece of attribute information records a layer attribute and a corresponding attribute value, and is used for adjusting the attribute value of the layer associated with the attribute information.
The adjusting unit 703 is configured to adjust, for each piece of attribute information, an attribute value of the layer attribute of the layer associated with the attribute information, so that the adjusted attribute value of the layer attribute matches the attribute value recorded by the attribute information.
And combining the adjusted image layer and the unadjusted image layer of the video to be analyzed into the video to be played.
Specifically, the parsing unit 702 is specifically configured to:
identifying characters in the character string to be analyzed one by one, and determining continuous characters between each information starting character and the corresponding information ending character in the character string to be analyzed as effective character strings; wherein, the character string to be analyzed refers to a layer name or a layer annotation; and in the character string to be analyzed, the first information end character behind each information start character is used as the information end character corresponding to the information start character.
And determining the effective character string for representing the layer attribute as the attribute character string.
For each attribute character string, an attribute value associated with the attribute character string is determined, and the layer attribute represented by the attribute character string and the associated attribute value are determined as attribute information.
Specifically, when determining the attribute value associated with each attribute character string, the parsing unit 702 is configured to:
for each attribute character string, determining N numerical values behind the attribute character string as attribute values associated with the attribute character string; wherein, N refers to the number of attribute values corresponding to the layer attribute represented by the attribute character string.
In the video processing apparatus provided by this embodiment, the parsing unit 702 parses the layer information directly to obtain the attribute information, and the adjusting unit 703 adjusts the layer attributes of the layers in the video with the parsed attribute information, so that the video to be played obtained after the adjustment displays the specific visual effect when played. With the apparatus of this embodiment, a user can adjust the visual effect of a video merely by editing the layer information of the target layers, without defining a corresponding data structure and parsing method for each newly added visual effect in a video parsing tool; various visual effects can thus be conveniently set in a video, improving the user experience.
Referring to fig. 8, a second video processing apparatus according to an embodiment of the present disclosure includes:
a determining unit 801, configured to determine, in response to an image layer selection operation, at least one image layer in a video to be edited as a target image layer.
The video to be edited comprises a plurality of image frames, and each image frame comprises at least one image layer.
An output unit 802, configured to output, for each target layer, a layer information editing interface of the target layer.
The layer information editing interface refers to any one or combination of a layer name editing interface and a layer annotation editing interface.
The determining unit 801 is further configured to determine, for each target layer, attribute information input by a user on a layer information editing interface of the target layer as layer information corresponding to the layer information editing interface.
Each piece of attribute information comprises a layer attribute and a corresponding attribute value.
And the deriving unit 803 is configured to derive the video to be edited and the layer information of the target layer as a video to be analyzed.
Through the determining unit 801 and the output unit 802, the apparatus provided by this embodiment can directly store the attribute information configured by the user as the layer information of the target layer, and the deriving unit 803 finally exports the video to be edited together with that layer information as the video to be analyzed, which presents the visual effect required by the user after being parsed and played. Based on the apparatus of this embodiment, the user can adjust the visual effect of a video without installing a plug-in for each specific visual effect, effectively improving the user experience.
A third video processing apparatus provided by an embodiment of the present application combines the two apparatuses above. Referring to fig. 9, the apparatus includes:
a determining unit 901, configured to determine, in response to a layer selection operation, at least one layer in a video to be edited as a target layer.
The video to be edited comprises a plurality of image frames, and each image frame comprises at least one image layer.
An output unit 902, configured to output, for each target layer, a layer information editing interface of the target layer.
The layer information editing interface refers to any one or combination of a layer name editing interface and a layer annotation editing interface.
The determining unit 901 is further configured to determine, for each target layer, attribute information input by a user on a layer information editing interface of the target layer as layer information corresponding to the layer information editing interface.
Each piece of attribute information comprises a layer attribute and a corresponding attribute value.
An exporting unit 903, configured to export the video to be edited together with the layer information of the target layer as a video to be analyzed.
An obtaining unit 904, configured to obtain a video to be analyzed.
The video to be analyzed comprises a plurality of image frames, and each image frame comprises at least one image layer; the video to be analyzed comprises at least one target layer, wherein attribute information is recorded in layer information of the target layer, and the layer information refers to any one or combination of a layer name and a layer annotation.
An analyzing unit 905, configured to analyze the layer information of the target layer to obtain the attribute information in the layer information.
Each piece of attribute information records a layer attribute and a corresponding attribute value, and is used for adjusting the attribute value of the layer associated with the attribute information.
An adjusting unit 906, configured to adjust, for each piece of attribute information, an attribute value of the layer attribute of the layer associated with the attribute information, so that the adjusted attribute value of the layer attribute matches the attribute value recorded by the attribute information.
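Continuing the illustrative sketches above, the adjusting step might look as follows; the Layer class is a hypothetical container introduced only for the example, and real layer models will differ.

    class Layer:
        """Hypothetical layer model used only for illustration."""
        def __init__(self, name, **attributes):
            self.name = name
            self.attributes = dict(attributes)  # e.g. {"alpha": [1.0]}

    def apply_attribute_info(layer, attribute_info):
        """For each piece of attribute information, overwrite the layer's
        attribute value so it matches the recorded attribute value."""
        for attribute, values in attribute_info:
            layer.attributes[attribute] = values
        return layer

    def assemble_video(adjusted_layers, unadjusted_layers):
        """Combine the adjusted and unadjusted layers into the video to be played."""
        return adjusted_layers + unadjusted_layers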
For the specific working principles of the video processing apparatus provided in any embodiment of the present application, reference may be made to the video processing method described in the method embodiments of the present application; details are not repeated here.
An embodiment of the present application further provides a computer storage medium storing a program which, when executed, implements the video processing method described in any embodiment of the present application.
The foregoing description of the disclosed embodiments enables a person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for processing video, comprising:
acquiring a video to be analyzed; the video to be analyzed comprises a plurality of image frames, and each image frame comprises at least one image layer; the video to be analyzed comprises at least one target layer, wherein attribute information is recorded in layer information of the target layer, and the layer information refers to any one or combination of a layer name and a layer annotation;
identifying characters in a character string to be analyzed one by one, and determining the continuous characters between each information starting character and its corresponding information ending character in the character string to be analyzed as effective character strings; the character string to be analyzed refers to a layer name or a layer annotation; in the character string to be analyzed, the first information ending character behind each information starting character is used as the information ending character corresponding to that information starting character;
determining the effective character strings that represent layer attributes as attribute character strings;
determining the attribute value associated with each attribute character string, and determining the layer attribute represented by the attribute character string and the associated attribute value as one piece of attribute information, wherein each piece of attribute information records a layer attribute and a corresponding attribute value and is used for adjusting the attribute value of the layer associated with the attribute information; and
for each piece of attribute information, adjusting the attribute value of the layer attribute of the layer associated with the attribute information, so that the adjusted attribute value of the layer attribute matches the attribute value recorded by the attribute information; and combining the adjusted layers and the unadjusted layers of the video to be analyzed into a video to be played.
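Read together with the illustrative sketches in the apparatus description above, the claimed flow can be traced end to end; the marker characters and attribute names remain assumptions of the example, not limitations of the claim.

    layer = Layer("title #alpha 0.5#", alpha=[1.0])
    info = parse_layer_info(layer.name)   # -> [("alpha", [0.5])]
    apply_attribute_info(layer, info)     # now layer.attributes["alpha"] == [0.5]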
2. The processing method according to claim 1, wherein the determining, for each attribute character string, the attribute value associated with the attribute character string comprises:
for each attribute character string, determining the N numerical values behind the attribute character string as the attribute values associated with the attribute character string, wherein N is the number of attribute values corresponding to the layer attribute represented by the attribute character string.
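Under the assumed vocabulary of the earlier sketch, N is looked up from the layer attribute itself, so a one-value attribute consumes one numerical value and a two-value attribute consumes two:

    parse_layer_info("#alpha 0.5#")         # N = 1 -> [("alpha", [0.5])]
    parse_layer_info("#position 100 200#")  # N = 2 -> [("position", [100.0, 200.0])]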
3. The processing method according to claim 1 or 2, wherein, after the adjusting, for each piece of attribute information, of the attribute value of the layer attribute of the layer associated with the attribute information, the method further comprises:
sequentially outputting the image frames in the video to be played, thereby playing the video to be played.
4. A method for processing video, comprising:
in response to the layer selection operation, determining at least one layer in the video to be edited as a target layer; the video to be edited comprises a plurality of image frames, and each image frame comprises at least one image layer;
outputting a layer information editing interface of each target layer; the layer information editing interface refers to any one or combination of a layer name editing interface and a layer annotation editing interface;
for each target layer, determining attribute information input by a user on the layer information editing interface of the target layer as the layer information corresponding to the layer information editing interface; each piece of attribute information comprises a layer attribute and a corresponding attribute value; the layer attribute represented by an attribute character string and the associated attribute value are determined as one piece of attribute information, the attribute value being the attribute value determined, for each attribute character string, as associated with that attribute character string; an effective character string that represents a layer attribute is determined as an attribute character string, an effective character string being the continuous characters between an information starting character and its corresponding information ending character in a character string to be analyzed; the character string to be analyzed refers to a layer name or a layer annotation, and the first information ending character behind each information starting character in the character string to be analyzed is used as the information ending character corresponding to that information starting character;
and exporting the video to be edited and the layer information of the target layer together into a video to be analyzed.
5. An apparatus for processing video, comprising:
the acquisition unit is used for acquiring a video to be analyzed; the video to be analyzed comprises a plurality of image frames, and each image frame comprises at least one image layer; the video to be analyzed comprises at least one target layer, wherein attribute information is recorded in layer information of the target layer, and the layer information refers to any one or combination of a layer name and a layer annotation;
the analysis unit is used for identifying characters in a character string to be analyzed one by one, and determining the continuous characters between each information starting character and its corresponding information ending character in the character string to be analyzed as effective character strings, wherein the character string to be analyzed refers to a layer name or a layer annotation, and, in the character string to be analyzed, the first information ending character behind each information starting character is used as the information ending character corresponding to that information starting character; determining the effective character strings that represent layer attributes as attribute character strings; and determining the attribute value associated with each attribute character string, and determining the layer attribute represented by the attribute character string and the associated attribute value as one piece of attribute information, wherein each piece of attribute information records a layer attribute and a corresponding attribute value and is used for adjusting the attribute value of the layer associated with the attribute information;
an adjusting unit, configured to adjust, for each piece of attribute information, an attribute value of a layer attribute of a layer associated with the attribute information, so that the adjusted attribute value of the layer attribute matches an attribute value recorded by the attribute information; and combining the adjusted layer and the unadjusted layer of the video to be analyzed into the video to be played.
6. The processing apparatus according to claim 5, wherein the analysis unit, when determining, for each attribute character string, the attribute value associated with the attribute character string, is specifically configured to:
for each attribute character string, determine the N numerical values behind the attribute character string as the attribute values associated with the attribute character string, wherein N is the number of attribute values corresponding to the layer attribute represented by the attribute character string.
7. An apparatus for processing video, comprising:
the determining unit is used for responding to the layer selection operation and determining at least one layer in the video to be edited as a target layer; the video to be edited comprises a plurality of image frames, and each image frame comprises at least one image layer;
the output unit is used for outputting a layer information editing interface of each target layer; the layer information editing interface refers to any one or combination of a layer name editing interface and a layer annotation editing interface;
the determining unit is configured to determine, for each target layer, attribute information input by a user on a layer information editing interface of the target layer as layer information corresponding to the layer information editing interface; each piece of attribute information comprises a layer attribute and a corresponding attribute value, the layer attribute represented by an attribute character string and the associated attribute value are determined to be one piece of attribute information, the attribute value is the attribute value associated with the attribute character string determined for each attribute character string, an effective character string used for representing the layer attribute is determined to be the attribute character string, the effective character string is a continuous character between each information starting character and a corresponding information ending character in a character string to be analyzed, the character string to be analyzed refers to a layer name or a layer annotation, and a first information ending character behind each information starting character in the character string to be analyzed is used as an information ending character corresponding to the information starting character;
and the exporting unit is used for exporting the video to be edited and the layer information of the target layer together into the video to be analyzed.
8. A computer storage medium storing a program which, when executed, implements the video processing method according to any one of claims 1 to 4.
CN201911244951.2A 2019-12-06 2019-12-06 Video processing method and device and computer storage medium Active CN112929732B (en)

Priority Applications (1)

Application Number: CN201911244951.2A; Granted Publication: CN112929732B (en); Priority Date: 2019-12-06; Filing Date: 2019-12-06; Title: Video processing method and device and computer storage medium

Publications (2)

Publication Number Publication Date
CN112929732A CN112929732A (en) 2021-06-08
CN112929732B (en) 2022-07-08

Family

ID=76162038

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116110072B (en) * 2023-04-12 2023-08-15 Jiangxi Shaoke Intelligent Construction Technology Co., Ltd. CAD drawing analysis method and system

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN102800052B (en) * 2012-06-13 2014-12-24 浙江大学 Semi-automatic digital method of non-standard map
CN105582672B (en) * 2015-12-23 2019-06-04 厦门光趣投资管理有限公司 A kind of scene of game figure layer display methods and calculate equipment
CN107025676B (en) * 2016-01-25 2021-02-02 阿里巴巴集团控股有限公司 Picture template, picture generation method and related device
CN108304495A (en) * 2018-01-11 2018-07-20 石化盈科信息技术有限责任公司 A kind of implementation method and realization device of the service interface of WFS

Patent Citations (12)

Publication number Priority date Publication date Assignee Title
CN101690229A (en) * 2007-06-26 2010-03-31 Nokia Corporation System and method for indicating temporal layer switching points
CN101882321A (en) * 2009-05-08 2010-11-10 Shanghai Ketai Century Technology Co., Ltd. System and method for rendering animation user interface
CN103678705A (en) * 2013-12-30 2014-03-26 Nanjing University Vector data concurrent conversion method from VCT file to shapefile file
CN104240179A (en) * 2014-03-04 2014-12-24 Shenzhen Shenxunhe Technology Co., Ltd. Layer adjusting method and device for converting 2D image to 3D image
CN103986935A (en) * 2014-04-30 2014-08-13 Huawei Technologies Co., Ltd. Encoding method, encoder and screen sharing device and system
CN105100773A (en) * 2015-07-20 2015-11-25 Tsinghua University Three-dimensional video manufacturing method, three-dimensional view manufacturing method and manufacturing system
CN105956604A (en) * 2016-04-20 2016-09-21 Guangdong Shunde International Joint Research Institute of Sun Yat-sen University and Carnegie Mellon University Action identification method based on two layers of space-time neighborhood characteristics
CN106339224A (en) * 2016-08-24 2017-01-18 Beijing Xiaomi Mobile Software Co., Ltd. Readability enhancing method and device
CN108259496A (en) * 2018-01-19 2018-07-06 Beijing SenseTime Technology Development Co., Ltd. Special effect program file package generation method, special effect generation method and device, and electronic equipment
CN109151341A (en) * 2018-09-27 2019-01-04 709th Research Institute of China Shipbuilding Industry Corporation Embedded platform multi-source HD video fusion system and method
CN109636884A (en) * 2018-10-25 2019-04-16 Alibaba Group Holding Limited Animation processing method, device and equipment
CN109801347A (en) * 2019-01-25 2019-05-24 Beijing ByteDance Network Technology Co., Ltd. Method, device, equipment and medium for generating an editable image template

Non-Patent Citations (3)

Title
Xiaodan Song, "A Practical Convolutional Neural Network as Loop Filter for Intra Frame", 2018 25th IEEE International Conference on Image Processing, 2018-09-06, full text *
Zeng Ke, "Research and Design of Image Coding and Video Image Overlay Based on OMAP3530", China Masters' Theses Full-text Database, 2018-06-15, full text *
Chen Shansha, "Research on Video Codec Technology Based on Layer Decomposition", China Masters' Theses Full-text Database, 2014-06-15, full text *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant