CN114125556B - Video data processing method, terminal and storage medium - Google Patents

Video data processing method, terminal and storage medium

Info

Publication number
CN114125556B
Authority
CN
China
Prior art keywords
video frame
data
editing
output video
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111342743.3A
Other languages
Chinese (zh)
Other versions
CN114125556A (en)
Inventor
王超
黄德安
陈子文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Imyfone Technology Co ltd
Original Assignee
Shenzhen Imyfone Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Imyfone Technology Co ltd filed Critical Shenzhen Imyfone Technology Co ltd
Priority to CN202111342743.3A
Publication of CN114125556A
Application granted
Publication of CN114125556B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally

Abstract

The invention discloses a video data processing method, a terminal and a storage medium, belonging to the technical field of video data processing. The method comprises the following steps: acquiring a first input video frame; editing the first input video frame to obtain first output video frame data, wherein the first output video frame data comprises an editing data set, and the editing data set represents the data in the first input video frame for which editing has been completed; acquiring a second input video frame; and editing the second input video frame according to target editing data in the editing data set to obtain second output video frame data. With this technical scheme, a video frame can be edited by reusing target editing data that has already been produced, rather than by repeating basic editing operations on the video frame, which improves video editing efficiency.

Description

Video data processing method, terminal and storage medium
Technical Field
The present disclosure relates to the field of video data processing technologies, and in particular, to a video data processing method, a terminal, and a storage medium.
Background
Thanks to the rapid development of science and technology, people's cultural life has become rich and colorful. People can work, relax and entertain themselves through highly expressive video. Just as editing a photo can make it more expressive, editing a video can increase its expressive force. Video editing is the process of obtaining video frames from a video file and then editing those frames. In technical essence, video editing is a data processing process applied to video frames, such as adding font effects, synthesizing audio, adding images or changing the background.
In the video editing process, one way to process video data is to obtain video frames from a video file and then process each video frame by applying basic functions, where a basic function is a basic editing function that operates on basic elements in the video frame, such as text, images and audio.
However, each video frame contains many basic elements such as text, images and audio, and the basic editing modes for these elements are very diverse; for text alone, the font or text size can be adjusted and font special effects can be added. Editing the basic elements of every video frame through these basic editing modes takes a long time, resulting in low video editing efficiency.
Disclosure of Invention
The main purpose of the embodiments of the present application is to provide a video data processing method, a terminal and a storage medium, which aim to implement editing of video frames according to stored target editing data, and improve efficiency of video editing.
To achieve the above object, an embodiment of the present application provides a method for processing video data, including the steps of: acquiring a first input video frame; editing the first input video frame to obtain first output video frame data, wherein the first output video frame data comprises an editing data set, and the editing data set represents a data set which completes editing in the first input video frame; acquiring a second input video frame; and editing the second input video frame according to the target editing data in the editing data set to obtain second output video frame data.
To achieve the above object, an embodiment of the present application further proposes a terminal including a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing a connection communication between the processor and the memory, the program implementing the steps of the aforementioned method when executed by the processor.
To achieve the above object, the present application provides a storage medium for computer-readable storage, the storage medium storing one or more programs executable by one or more processors to implement the steps of the foregoing method.
In the video data processing method, terminal and storage medium described above, a first input video frame is first edited to obtain edited first output video frame data, where the first output video frame data comprises an editing data set whose editing has already been completed; after a second input video frame is acquired, the second input video frame is edited using target editing data from the editing data set in the first output video frame data to obtain second output video frame data. With this technical scheme, video frames can be edited by reusing target editing data that has already been produced, rather than by repeating basic editing operations on them, which improves video editing efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart illustrating a step of a method for processing video data according to an embodiment of the present application;
fig. 2 is a flowchart illustrating another step of the method for processing video data according to the embodiment of the present application;
fig. 3 is a flowchart illustrating another step of the method for processing video data according to the embodiment of the present application;
fig. 4 is a flowchart illustrating another step of the method for processing video data according to the embodiment of the present application;
fig. 5 is a flowchart illustrating another step of the method for processing video data according to the embodiment of the present application;
fig. 6 is a schematic structural diagram of a terminal provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As noted above, editing the basic elements of each video frame through basic editing modes takes a long time, so video editing efficiency is low; the following embodiments address this problem.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a flowchart illustrating a step of a method for processing video data according to an embodiment of the present application.
S110, acquiring a first input video frame.
When video shooting is completed and the video needs to be processed, a video frame to be edited is obtained from the video file, and this video frame is taken as the first input video frame to be input for editing.
S120, editing the first input video frame to obtain first output video frame data, wherein the first output video frame data comprises an editing data set.
The first input video frame is edited to obtain first output video frame data once editing is completed, where the first output video frame data comprises an editing data set formed from the editing effect data of the completed editing operations.
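The patent does not fix a concrete data layout for the first output video frame data. As a rough, non-authoritative sketch in Python, it could be modeled as a frame plus its editing data set; every name below (EditData, OutputVideoFrameData and their fields) is a hypothetical choice made for illustration, not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class EditData:
    """One piece of completed editing-effect data (e.g. a text box or an image layer)."""
    kind: str                # assumed category, e.g. "text", "picture", "audio"
    region: Dict[str, int]   # region coordinates, e.g. {"left": 120, "top": 140, "right": 250, "bottom": 160}
    params: Dict[str, Any]   # effect parameters, e.g. {"content": "company", "style": "A"}

@dataclass
class OutputVideoFrameData:
    """First output video frame data: the edited frame plus its editing data set."""
    sequence_number: int
    frame: Any                                                 # the rendered output video frame (pixel data)
    edit_data_set: List[EditData] = field(default_factory=list)
```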
S130, acquiring a second input video frame.
Another video frame to be edited is acquired from the video file and taken as the second input video frame to be input for editing.
And S140, editing the second input video frame according to the target editing data in the editing data set to obtain second output video frame data.
When the second input video frame needs to be edited, some of its editing operations are the same as operations already performed on the first input video frame; those operations are represented by the target editing data in the editing data set included in the first output video frame data. The second input video frame is therefore edited using the target editing data to obtain second output video frame data.
It should be noted that there are several ways to edit the second input video frame using the target editing data in the editing data set. One is to cache the editing data set of the first output video frame and obtain the target editing data directly from the cache when processing the second input video frame; the other is to store the first output video frame data first and then retrieve the target editing data from the database in which it is stored.
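Continuing the sketch above, reusing cached target editing data for a second frame could look like the following; `apply_edit` stands for whatever renderer the terminal already has for applying one completed editing effect to a frame, and is an assumption rather than an API named in the patent:

```python
def edit_with_target_data(input_frame, target_edits, apply_edit, sequence_number):
    """Edit a new input video frame by reusing completed target editing data.

    No basic editing operation is rebuilt from scratch: each stored effect is
    simply re-applied to the new frame.
    """
    frame = input_frame
    for edit in target_edits:
        frame = apply_edit(frame, edit)   # re-apply the already-completed editing effect
    return OutputVideoFrameData(sequence_number=sequence_number,
                                frame=frame,
                                edit_data_set=list(target_edits))
```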
Referring to fig. 2, fig. 2 shows an embodiment in which the first output video frame data is stored and a database is then searched to acquire the target editing data. The embodiment of fig. 2 adds steps S150 to S180 to the embodiment of fig. 1.
S150, storing the first output video frame data in a clip database.
After the editing of the first input video frame is completed, the first output video frame data comprising the editing data set is obtained and stored in a clip database.
S160, searching the clip database.
When editing data stored in the clip database needs to be acquired, the clip database is searched.
The first output video frame data is made retrievable by setting an identifier for it. The identifier may be a serial number or a keyword; for example, the keywords of a certain piece of output video frame data may be set to "travel promotion" and "culture".
S170, when the corresponding search result is matched with the first output video frame data, the first output video frame data is obtained.
When the result obtained by searching the clip database matches the first output video frame data, the first output video frame data is acquired.
S180, acquiring target editing data from an editing data set in the first output video frame data.
The first output video frame data includes an edit data set in which editing is completed, and target edit data for editing the second input video frame is acquired from the edit data set.
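A minimal sketch of this retrieve-then-extract flow (S150 to S180), assuming the clip database behaves like an in-memory mapping from identifiers (serial numbers or keyword tags) to stored output video frame data; `select` stands for whichever step picks the target editing data and is not specified by the patent:

```python
def get_target_edit_data(clip_database, identifiers, select):
    """Search the clip database by identifier and pick target editing data."""
    for key in identifiers:                       # e.g. ["travel promotion", "culture"] or a serial number
        match = clip_database.get(key)            # S160: search the clip database
        if match is not None:                     # S170: the search result matches stored frame data
            return select(match.edit_data_set)    # S180: acquire target editing data from its editing data set
    return None                                   # nothing stored under these identifiers
```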
It should be noted that the first output video frame data may be stored using several data structures: all of its parts may be stored independently, or it may be stored with an association relationship established between them.
Referring to fig. 3, fig. 3 is an embodiment of storing first output video frame data by setting a data structure having an association relationship. Based on the embodiment shown in fig. 2, the specific implementation of step S150 in the embodiment shown in fig. 3 includes steps S151 to S154.
S151, determining a serial number of the first output video frame data.
When the editing operations on the first input video frame are completed, first output video frame data comprising an editing data set of editing effect data is obtained; the first output video frame data also comprises the first output video frame itself, i.e. the complete video frame obtained after the first input video frame has been edited. A correspondence is established between the first output video frame and the editing data set in the first output video frame data, a serial number is set for the first output video frame data as a whole, and either the first output video frame or the editing data set can then be found by looking up the serial number according to the preset correspondence.
S152, determining the region coordinates of each editing data in the editing data set in the first output video frame.
In editing the first input video frame, each editing operation is performed within a certain area of the first input video frame. Region coordinates are therefore defined for the position of each piece of editing data in the editing data set within the first output video frame; the region may be circular or rectangular.
And S153, establishing an association relationship among the serial number, the region coordinates and each editing data.
The serial number of the first output video frame data, the region coordinates of each piece of editing data, and each piece of editing data are put into correspondence, and an association relationship is established among them. When the serial number of the first output video frame data is retrieved, the region coordinates corresponding to that serial number, and the one or more pieces of editing data within each region, can then be retrieved in turn.
And S154, storing the first output video frame data in a clip database according to the association relation.
After the association relationship among the serial number, the region coordinates and each piece of editing data has been established, the first output video frame data is stored in the clip database according to the data structure defined by that association relationship.
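As an illustration of the association relationship of S151 to S154 (serial number, then region coordinates, then editing data), one possible in-memory layout is sketched below; keying the clip database by serial number and each region by a coordinate tuple is an assumption made for the example:

```python
def store_output_frame_data(clip_database, frame_data):
    """Store first output video frame data under its serial number, grouping
    each piece of editing data by its region coordinates."""
    record = {}
    for edit in frame_data.edit_data_set:
        region_key = (edit.region["left"], edit.region["top"],
                      edit.region["right"], edit.region["bottom"])
        record.setdefault(region_key, []).append(edit)      # one or more edits per region
    clip_database[frame_data.sequence_number] = record      # serial number -> region coordinates -> editing data
    return record
```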
For example, two editing data as shown in table 1 are stored in the clip database.
TABLE 1

Output video frame | Region coordinates | Editing data type | Editing data
10001 | left:120, top:140, right:250, bottom:160 | Text | content "company", special effect style A, font size 16, font color red
10001 | left:220, top:240, right:250, bottom:260 | Picture | image Q, special effect style B
The edit data stored in table 1 above indicates that, in the output video frame 10001, in a rectangular region with region coordinates { left:120, top:140, right:250, bottom:160}, there is one piece of completed text edit data, the content of text in the text edit data is "company", the special effect style of text is a, the size of font is 16, and the color of font is red; in the output video frame 10001, in a rectangular region with region coordinates { left:220, top:240, right:250, bottom:260}, there is also one piece of picture editing data, the picture data is an image Q, and the special effect pattern of the image Q is B.
It should be noted that there are several ways to physically store the first output video frame data in the clip database according to the association relationship: one is to store the first output video frame data locally in JSON format; the other is to store the first output video frame data in a network database.
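For the local JSON option, the record of Table 1 might be serialized roughly as follows; the directory, file name and every field name are hypothetical choices that merely mirror the table, not a format defined by the patent:

```python
import json
import os

os.makedirs("clip_database", exist_ok=True)    # hypothetical local store directory

# Hypothetical serialization of the Table 1 record; all field names are assumptions.
record = {
    "sequence_number": 10001,
    "edits": [
        {"region": {"left": 120, "top": 140, "right": 250, "bottom": 160},
         "type": "text", "content": "company",
         "style": "A", "font_size": 16, "font_color": "red"},
        {"region": {"left": 220, "top": 240, "right": 250, "bottom": 260},
         "type": "picture", "image": "Q", "style": "B"},
    ],
}

with open("clip_database/10001.json", "w", encoding="utf-8") as f:
    json.dump(record, f, ensure_ascii=False, indent=2)   # local JSON storage of the editing record
```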
In step S180, there are several ways to obtain the target editing data from the editing data set. One is to set an identifier for each piece of editing data so that the pieces can be distinguished, and to obtain the target editing data directly by its identifier; the other is to convert the editing data set corresponding to the region coordinates into video frames, so that the target editing data can be selected intuitively from those video frames and then acquired.
Referring to fig. 4, fig. 4 shows an embodiment, among the several ways of obtaining target editing data from the editing data set according to the region coordinates, in which the editing data set corresponding to the region coordinates is converted into a video frame and the target editing data is obtained from that video frame. Steps S181 and S182 in fig. 4 are a specific implementation of step S180 in fig. 3.
S181, converting the editing data set into a target video frame according to the region coordinates.
After the first output video frame data has been acquired, the editing data set formed by all of the completed editing data within a given set of region coordinates is obtained, and, taking the region coordinates as the unit, the editing data set belonging to the region corresponding to those coordinates is converted into a target video frame that is displayed in the form of a video frame.
S182, acquiring target editing data from the target video frame.
After converting the edit data set into the target video frame, the target edit data is acquired from the edit data displayed in the target video frame.
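A rough sketch, under the same assumptions as above, of converting the editing data belonging to one region into a preview (target) video frame and returning the candidate target editing data; `render_edit` and `blank_frame` are placeholders for the terminal's own rendering primitives:

```python
def build_target_frame(edit_data_set, region, render_edit, blank_frame):
    """Render only the editing data inside one region into a target video frame."""
    in_region = [e for e in edit_data_set if e.region == region]   # take the region coordinates as the unit
    frame = blank_frame
    for edit in in_region:
        frame = render_edit(frame, edit)       # display the completed edit so it can be inspected
    return frame, in_region                    # preview frame plus the candidate target editing data
```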
Referring to fig. 5, steps S1811 and S1812 are one specific implementation of step S181; when step S181 is implemented by steps S1811 and S1812, step S182 may be implemented by step S1821.
S1811, a function interface set is set for the editing data set.
A callable function interface is set for each of the pieces of editing data in the editing data set, and the function interfaces used to call the editing data together form the function interface set.
S1812, converting at least one piece of editing data into at least one piece of video frame data according to each function interface.
For each function interface in the function interface set corresponding to the editing data set, the at least one piece of editing data corresponding to that function interface is converted into video frame data for display.
S1821, acquiring the target editing data from the target video frame according to the function interface set.
When the target editing data to be acquired has been determined from the target video frame, the function interfaces corresponding to the target editing data are called from the function interface set, and the target editing data is thereby acquired.
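Steps S1811 to S1821 can be pictured as a small registry of callable interfaces. The sketch below assumes a `renderers` mapping from editing-data kind to a display function and a `wanted` selection predicate, neither of which the patent specifies:

```python
def build_function_interface_set(edit_data_set, renderers):
    """S1811/S1812: attach a callable function interface to every piece of editing data."""
    interface_set = []
    for edit in edit_data_set:
        render = renderers[edit.kind]                                # e.g. renderers["text"], renderers["picture"]
        interface_set.append((edit, lambda e=edit, r=render: r(e)))  # interface converts the edit into frame data
    return interface_set

def pick_target_edit_data(interface_set, wanted):
    """S1821: call the interfaces and keep the editing data whose displayed result is selected."""
    return [edit for edit, call in interface_set if wanted(call())]
```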
In combination with the above description, the editing operations in the embodiments of the present application include adding text, adding pictures, adding special effects, adding audio, and so on; the editing data set obtained after editing is completed includes at least one of a text box, a layer, an animation and audio.
It should be noted that, for convenience and brevity of description, the specific working processes of the video data processing apparatus and of each module described above may refer to the corresponding processes in the foregoing video data processing method embodiments, which are not described here again.
Referring to fig. 6, the terminal includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a non-volatile storage medium and an internal memory.
The non-volatile storage medium may store an operating system and a computer program. The computer program comprises program instructions that, when executed, cause the processor to perform any of the video data processing methods described above.
The processor is used to provide computing and control capabilities to support the operation of the entire computer device.
The internal memory provides an environment for running the computer program in the non-volatile storage medium; when the computer program is executed by the processor, it causes the processor to perform any of the video data processing methods described above.
The network interface is used for network communication such as transmitting assigned tasks and the like. It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Illustratively, in some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of:
acquiring a first input video frame;
editing the first input video frame to obtain first output video frame data, wherein the first output video frame data comprises an editing data set, and the editing data set represents a data set which completes editing in the first input video frame;
acquiring a second input video frame;
and editing the second input video frame according to the target editing data in the editing data set to obtain second output video frame data.
In some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of:
storing the first output video frame data in a clip database;
retrieving the clip database;
when the retrieval result corresponding to the retrieval is matched with the first output video frame data, acquiring the first output video frame data;
the target edit data is obtained from an edit data set in the first output video frame data.
In some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of:
determining a sequence number of the first output video frame data;
determining region coordinates of each edit data in the edit data set in the first output video frame;
establishing an association relationship among the serial number, the region coordinates and each editing data;
and storing the first output video frame data in the clip database according to the association relation.
In some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of:
converting the editing data set into a target video frame according to the region coordinates;
and acquiring the target editing data from the target video frame.
In some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of:
setting a function interface set for the editing data set, wherein each function interface is used for selecting and calling at least one editing data;
converting the at least one edit data into at least one video frame data according to each functional interface;
and converting the at least one video frame data into the target video frame according to the region coordinates.
In some embodiments, the processor is configured to execute a computer program stored in the memory to implement the steps of:
and acquiring the target editing data from the target video frame according to the functional interface set.
Embodiments of the present application further provide a computer readable storage medium, where the computer readable storage medium stores a computer program, the computer program includes program instructions, and the processor executes the program instructions to implement any of the video data processing methods provided in the embodiments of the present application.
The computer readable storage medium may be an internal storage unit of the computer device according to the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, which are provided on the computer device.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and substitutions of equivalents may be made without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (5)

1. A method of processing video data, the method comprising the steps of:
acquiring a first input video frame;
editing the first input video frame to obtain first output video frame data, wherein the first output video frame data comprises a first output video frame and an editing data set corresponding to the first output video frame, and the editing data set represents a data set which is edited in the first input video frame;
acquiring a second input video frame;
determining a sequence number of the first output video frame data;
determining region coordinates of each edit data in the edit data set in the first output video frame;
establishing an association relationship among the serial number, the region coordinates and each editing data;
storing the first output video frame data in a clip database according to the association relationship;
retrieving the clip database according to the second input video frame;
when the retrieval result corresponding to the retrieval is matched with the first output video frame data, acquiring the first output video frame data;
setting a function interface set for the editing data set, wherein each function interface is used for selecting and calling at least one editing data;
converting the at least one edit data into at least one video frame data according to each functional interface;
converting the at least one video frame data into a target video frame according to the region coordinates;
acquiring target editing data from the target video frame according to the functional interface set;
and editing the second input video frame according to the target editing data in the editing data set to obtain second output video frame data.
2. The method of processing video data according to claim 1, wherein the edit data set includes at least one of a text box, a layer, an animation, and audio.
3. A processing apparatus for video data, comprising:
the first acquisition module is used for acquiring a first input video frame;
the first editing module is used for editing the first input video frame to obtain first output video frame data, wherein the first output video frame data comprises a first output video frame and an editing data set corresponding to the first output video frame, and the editing data set represents a data set which is edited in the first input video frame;
the second acquisition module is used for acquiring a second input video frame;
the processing device is further configured to determine a sequence number of the first output video frame data; determining region coordinates of each edit data in the edit data set in the first output video frame; establishing an association relationship among the serial number, the region coordinates and each editing data; storing the first output video frame data in a clip database according to the association relationship; retrieving the clip database according to the second input video frame; when the retrieval result corresponding to the retrieval is matched with the first output video frame data, acquiring the first output video frame data; setting a function interface set for the editing data set, wherein each function interface is used for selecting and calling at least one editing data; converting the at least one edit data into at least one video frame data according to each functional interface; converting the at least one video frame data into a target video frame according to the region coordinates; acquiring target editing data from the target video frame according to the functional interface set;
and the second editing module is used for editing the second input video frame according to the target editing data in the editing data set to obtain second output video frame data.
4. A terminal comprising a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling a connection communication between the processor and the memory, the program when executed by the processor implementing the steps of the method of processing video data according to any one of claims 1 to 2.
5. A storage medium for computer-readable storage, characterized in that the storage medium stores one or more programs executable by one or more processors to implement the steps of the method of processing video data of any one of claims 1 to 2.
CN202111342743.3A 2021-11-12 2021-11-12 Video data processing method, terminal and storage medium Active CN114125556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111342743.3A CN114125556B (en) 2021-11-12 2021-11-12 Video data processing method, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111342743.3A CN114125556B (en) 2021-11-12 2021-11-12 Video data processing method, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN114125556A (en) 2022-03-01
CN114125556B (en) 2024-03-26

Family

ID=80379744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111342743.3A Active CN114125556B (en) 2021-11-12 2021-11-12 Video data processing method, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN114125556B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107484008A (en) * 2017-09-07 2017-12-15 北京奇虎科技有限公司 A kind of video editing and sharing method, device, electronic equipment and medium
US10217488B1 (en) * 2017-12-15 2019-02-26 Snap Inc. Spherical video editing
CN109429078A (en) * 2017-08-24 2019-03-05 北京搜狗科技发展有限公司 Method for processing video frequency and device, for the device of video processing
CN110798735A (en) * 2019-08-28 2020-02-14 腾讯科技(深圳)有限公司 Video processing method and device and electronic equipment
CN111243632A (en) * 2020-01-02 2020-06-05 北京达佳互联信息技术有限公司 Multimedia resource generation method, device, equipment and storage medium
CN111475676A (en) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 Video data processing method, system, device, equipment and readable storage medium
KR20200098467A (en) * 2020-08-12 2020-08-20 신윤성 Information media for each region having a structure that is compatible with each other by linking use between a plurality of (two or more) information media with different characteristics, and an editing method and information providing method thereof.
CN111899155A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1999031663A1 (en) * 1997-12-17 1999-06-24 Sony Corporation Device for generating editing list, method and device for editing
JP2001292408A (en) * 2000-04-07 2001-10-19 Sony Corp Video editing device, video editing method, vide editing system, and computer-readable recording medium recorded with video editing program
AU2002350949A1 (en) * 2001-06-25 2003-01-08 Redhawk Vision Inc. Video event capture, storage and processing method and apparatus
CN108040288B (en) * 2017-12-20 2019-02-22 北京达佳互联信息技术有限公司 Video editing method, device and intelligent mobile terminal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109429078A (en) * 2017-08-24 2019-03-05 北京搜狗科技发展有限公司 Method for processing video frequency and device, for the device of video processing
CN107484008A (en) * 2017-09-07 2017-12-15 北京奇虎科技有限公司 A kind of video editing and sharing method, device, electronic equipment and medium
US10217488B1 (en) * 2017-12-15 2019-02-26 Snap Inc. Spherical video editing
CN110798735A (en) * 2019-08-28 2020-02-14 腾讯科技(深圳)有限公司 Video processing method and device and electronic equipment
CN111243632A (en) * 2020-01-02 2020-06-05 北京达佳互联信息技术有限公司 Multimedia resource generation method, device, equipment and storage medium
CN111475676A (en) * 2020-04-07 2020-07-31 深圳市超高清科技有限公司 Video data processing method, system, device, equipment and readable storage medium
CN111899155A (en) * 2020-06-29 2020-11-06 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
KR20200098467A (en) * 2020-08-12 2020-08-20 신윤성 Information media for each region having a structure that is compatible with each other by linking use between a plurality of (two or more) information media with different characteristics, and an editing method and information providing method thereof.

Also Published As

Publication number Publication date
CN114125556A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
US9972113B2 (en) Computer-readable recording medium having stored therein album producing program, album producing method, and album producing device for generating an album using captured images
US20230162419A1 (en) Picture processing method and apparatus, device, and storage medium
CN109271587B (en) Page generation method and device
CN107748615B (en) Screen control method and device, storage medium and electronic equipment
CN111209422A (en) Image display method, image display device, electronic device, and storage medium
CN110363837B (en) Method and device for processing texture image in game, electronic equipment and storage medium
CN114125556B (en) Video data processing method, terminal and storage medium
CN113538502A (en) Picture clipping method and device, electronic equipment and storage medium
CN101257558A (en) Mosaic process for digital camera as well as method for reducing mosaic process
CN113840099B (en) Video processing method, device, equipment and computer readable storage medium
CN114331808A (en) Action posture storage method, device, medium and electronic equipment
CN115731111A (en) Image data processing device and method, and electronic device
CN114125555B (en) Editing data preview method, terminal and storage medium
CN112269957A (en) Picture processing method, device, equipment and storage medium
US20130155087A1 (en) System and method for configuring graphics register data and recording medium
CN115665335B (en) Image processing method, image processing apparatus, image forming apparatus, and medium
CN114758339B (en) Method and device for acquiring character recognition model, computer equipment and storage medium
CN113435454B (en) Data processing method, device and equipment
CN111179388B (en) Cartoon editing method and terminal based on 3D scene
US20240104811A1 (en) Image editing method and device
JP2009015774A (en) Information processing unit and information processing method
CN112463280A (en) Image generation method and device, electronic equipment and computer readable storage medium
CN116503535A (en) Image rendering method, device, electronic equipment and storage medium
WO2024045787A1 (en) Pick operand detection method and apparatus, computer device, readable storage medium, and computer program product
CN109922291B (en) GIF generation method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant