CN111741365A - Video composition data processing method, system, device and storage medium - Google Patents
- Publication number
- CN111741365A CN111741365A CN202010410286.6A CN202010410286A CN111741365A CN 111741365 A CN111741365 A CN 111741365A CN 202010410286 A CN202010410286 A CN 202010410286A CN 111741365 A CN111741365 A CN 111741365A
- Authority
- CN
- China
- Prior art keywords
- identification code
- playing time
- video
- time node
- audio
- Prior art date
- Legal status
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/233—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
Abstract
The invention discloses a video synthesis data processing method, system, apparatus, and storage medium. The method comprises the following steps: receiving a special effect type and a plurality of pictures uploaded by a first terminal; acquiring a checkpoint document according to the special effect type, the checkpoint document comprising a first identification code of an audio rhythm point, a second identification code of an audio playing time node, a third identification code of a picture playing time node, and a synthesis effect code; acquiring target audio data according to the first identification code; and synthesizing the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code, and the synthesis effect code. With the invention, a non-professional video editor can synthesize a special effect video simply by uploading a selected special effect type and a plurality of pictures from the first terminal, and thus complete video editing quickly. The invention is widely applicable in the technical field of video data processing.
Description
Technical Field
The present invention relates to the field of video data processing technologies, and in particular, to a method, a system, an apparatus, and a storage medium for processing video composite data.
Background
With the rapid development of audio and video technology, people are no longer satisfied with text-only social interaction. As social media has evolved from text to animated images and on to audio and video, the material people use to communicate has grown steadily richer, and video content in particular has won the favor of more and more people. Many apps now process audio and video to meet users' demand for material, for example by adding special effects to video or substituting popular music. However, existing audio/video editing software generally requires professional skill to complete video editing quickly, and non-professionals cannot do so.
Disclosure of Invention
To solve the above technical problem, the present invention aims to provide a video composition data processing method, system, apparatus, and storage medium that, at least to some extent, enable a non-professional video editor to complete video editing quickly.
A first aspect of an embodiment of the present invention provides:
a video composition data processing method, comprising the steps of:
receiving a special effect type and a plurality of pictures uploaded by a first terminal;
acquiring a checkpoint document according to the special effect type, wherein the checkpoint document comprises a first identification code of an audio rhythm point, a second identification code of an audio playing time node, a third identification code of a picture playing time node and a synthesis effect code;
acquiring target audio data according to the first identification code;
and synthesizing the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code and the synthesis effect code.
Further, the step of generating the checkpoint document comprises:
acquiring audio data to be processed;
carrying out frame processing on the audio data to be processed;
acquiring a plurality of audio rhythm points in the audio data to be processed after frame processing;
setting first identification codes corresponding to the plurality of audio rhythm points;
and generating a checkpoint document according to the first identification code.
Further, before the generating of the checkpoint document according to the first identification code, the method further includes the following steps:
acquiring a preset audio playing time node and a preset picture playing time node;
and setting a second identification code corresponding to the audio playing time node and a third identification code corresponding to the picture playing time node.
Further, the setting of the first identification codes corresponding to the plurality of audio rhythm points includes:
acquiring target audio rhythm points from the plurality of audio rhythm points;
and setting a first identification code corresponding to the target audio rhythm point.
Further, the synthesizing of the target audio data and the plurality of pictures into the special effect video according to the second identification code, the third identification code and the synthesis effect code includes:
acquiring an audio playing time node according to the second identification code, and identifying a picture playing time node according to the third identification code;
acquiring a playing time axis of the special-effect video;
matching the audio playing time node and the picture playing time node on the playing time axis;
setting the target audio data on the playing time axis according to the matching result of the audio playing time node;
setting the plurality of pictures on the playing time axis according to the matching result of the picture playing time node;
and generating a special effect video according to the setting result of the target audio data and the setting results of the plurality of pictures.
Further, the checkpoint document also contains a fourth identification code for a transparency change rule; the generating of the special effect video according to the setting result of the target audio data and the setting results of the plurality of pictures includes:
acquiring a transparency change rule according to the fourth identification code;
and generating a special effect video according to the transparency change rule, the setting result of the target audio data and the setting results of the plurality of pictures.
Further, the playing time axis includes a plurality of coordinate data.
A second aspect of an embodiment of the present invention provides:
a video composition data processing system, comprising:
the receiving module is used for receiving the special effect type and the multiple pictures uploaded by the first terminal;
the first obtaining module is used for obtaining a checkpoint document according to the special effect type, wherein the checkpoint document comprises a first identification code of an audio rhythm point, a second identification code of an audio playing time node, a third identification code of a picture playing time node and a synthesis effect code;
the second acquisition module is used for acquiring target audio data according to the first identification code;
and the synthesis module is used for synthesizing the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code and the synthesis effect code.
A third aspect of embodiments of the present invention provides:
a video composition data processing apparatus comprising:
at least one memory for storing a program;
at least one processor for loading the program to perform the video composition data processing method.
A fourth aspect of an embodiment of the present invention provides:
a computer readable storage medium having stored therein processor-executable instructions which, when executed by a processor, implement the video composition data processing method described above.
The embodiments of the invention have the following beneficial effect: a checkpoint document containing the first identification code of an audio rhythm point, the second identification code of an audio playing time node, the third identification code of a picture playing time node, and the synthesis effect code is acquired according to the special effect type uploaded by the first terminal; target audio data are then acquired according to the first identification code; and the target audio data and the plurality of pictures are synthesized into a special effect video according to the second identification code, the third identification code, and the synthesis effect code. A non-professional video editor can therefore synthesize a special effect video simply by uploading a selected special effect type and a plurality of pictures from the first terminal, and thus complete video editing quickly.
Drawings
Fig. 1 is a flowchart of a video composition data processing method according to an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration; they impose no order on the steps, and the execution order of the steps may be adapted by those skilled in the art.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Referring to fig. 1, an embodiment of the present invention provides a video composition data processing method applied to a server. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The server may communicate with a plurality of terminal devices, in which video editing software can be installed.
The present embodiment includes steps S11-S14:
S11, receiving the special effect type and the plurality of pictures uploaded by the first terminal. The first terminal is a smartphone, tablet computer, notebook computer, or desktop computer, and uploads the special effect type and the pictures selected by the user to the server. The special effect type may be obtained in either of two ways: the video editor enters text on the display interface of the first terminal and the server matches the text to a corresponding special effect type; or the video editor selects a type option on the display interface and the server matches the option to the corresponding special effect type.
S12, obtaining a checkpoint document according to the special effect type, wherein the checkpoint document comprises a first identification code of an audio rhythm point, a second identification code of an audio playing time node, a third identification code of a picture playing time node, and a synthesis effect code. The identification codes and the synthesis effect code are character strings recognizable by the server that controls the video editing software, and their function is to let the video editing software accurately identify the corresponding information.
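For illustration, the document described in S12 might be serialized as a small JSON object. The patent does not define a concrete format, so every field name below is hypothetical:

```python
import json

# Hypothetical layout of a checkpoint document; the patent leaves the
# concrete format open, so these field names are illustrative only.
checkpoint_document = {
    "effect_type": "beat_flash",         # special effect type selected on the terminal
    "first_id_codes": ["R001", "R002"],  # identification codes of audio rhythm points
    "second_id_codes": [                 # audio playing time nodes (start/end, seconds)
        {"code": "A001", "start": 0.0, "end": 15.0},
    ],
    "third_id_codes": [                  # picture playing time nodes
        {"code": "P001", "start": 0.0, "end": 3.0},
        {"code": "P002", "start": 3.0, "end": 6.0},
    ],
    "synthesis_effect_code": "FX_CROSSFADE",
}

# Round-trip through JSON, as a server might when storing the document.
serialized = json.dumps(checkpoint_document, indent=2)
restored = json.loads(serialized)
```

The start/end pairs mirror the patent's later note that a time node comprises a starting node and an ending node.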
In some embodiments, the checkpoint document is generated by:
acquiring audio data to be processed; the audio data to be processed is stored in the database and is used for providing audio materials for the subsequent video clipping process.
Carrying out frame processing on the audio data to be processed; frame processing decomposes the audio data into frame-by-frame data, so that the rhythm-point type of each frame can be analyzed conveniently.
Acquiring a plurality of audio rhythm points in the audio data to be processed after frame processing;
setting first identification codes corresponding to the plurality of audio rhythm points;
and generating a card point document according to the first identification code. I.e. the first identification code is stored in the checkpoint document.
In this embodiment, placing the first identification code in the checkpoint document makes it convenient to extract the corresponding audio data during video editing, accelerating the editing process.
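The patent does not specify how rhythm points are detected after frame processing; a simple energy-jump heuristic is one possible sketch. The frame size, jump ratio, and code format below are all illustrative assumptions:

```python
import math

def frame_energy(samples, frame_size):
    """Split samples into fixed-size frames and return per-frame energy."""
    energies = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        energies.append(sum(s * s for s in frame) / frame_size)
    return energies

def detect_rhythm_points(samples, frame_size, jump_ratio=2.0):
    """Return frame indices where energy jumps sharply -- a stand-in for the
    patent's rhythm-point analysis, whose algorithm is not specified."""
    energies = frame_energy(samples, frame_size)
    points = []
    for i in range(1, len(energies)):
        if energies[i - 1] > 0 and energies[i] / energies[i - 1] >= jump_ratio:
            points.append(i)
    return points

# Synthetic "audio": quiet passages with a loud burst every 4th frame.
frame_size = 100
samples = []
for f in range(16):
    amp = 1.0 if f % 4 == 0 else 0.1
    samples.extend(amp * math.sin(2 * math.pi * 440 * n / 8000) for n in range(frame_size))

points = detect_rhythm_points(samples, frame_size)          # -> [4, 8, 12]
first_id_codes = {p: f"R{p:03d}" for p in points}           # assign first identification codes
```

In a production system this heuristic would be replaced by a proper onset/beat detector; the point here is only the flow from frame processing to rhythm points to identification codes.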
In some embodiments, before the generating the checkpoint document according to the first identification code, the method further comprises the following steps:
acquiring a preset audio playing time node and a preset picture playing time node; the audio playing time node comprises an audio playing starting node and an audio playing ending node. The picture playing time node comprises a starting node and an ending node of picture playing. And determining the audio playing time length and the playing time point of the picture playing time length through the starting node and the ending node.
And setting a second identification code corresponding to the audio playing time node and a third identification code corresponding to the picture playing time node, and storing the second identification code and the third identification code in a checkpoint document, so that the corresponding audio playing time node and the picture playing time node can be quickly acquired in the video editing process.
In some embodiments, the setting of the first identification codes corresponding to the plurality of audio rhythm points may be specifically implemented by the following steps:
acquiring target audio rhythm points from the plurality of audio rhythm points; a target audio rhythm point may be a rhythm point with a certain characteristic among the plurality of audio rhythm points, such as a point where the pitch changes from high to low, or where the tempo changes from slow to fast.
And setting a first identification code corresponding to the target audio rhythm point.
In this implementation, first identification codes are set only for the target rhythm points, which reduces the storage required in the database and the workload of the server.
S13, acquiring target audio data according to the first identification code. The target audio data may be pre-stored in a database corresponding to the server, which stores multiple types of audio data. In this step, target audio data of the corresponding type is retrieved from the database by matching the first identification code. The target audio data may be a single piece of audio data, several pieces of the same type, or several pieces of different types.
S14, synthesizing the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code, and the synthesis effect code.
In some embodiments, the synthesizing of the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code, and the synthesis effect code comprises:
acquiring an audio playing time node according to the second identification code, and identifying a picture playing time node according to the third identification code;
acquiring a playing time axis of the special-effect video; in some embodiments, the playback time axis includes a plurality of coordinate data.
Matching the audio playing time node and the picture playing time node on the playing time axis; specifically, the audio playing period and the picture playing period are determined from the coordinate data of the playing time nodes on the playing time axis. The audio playing period and the picture playing period may overlap or be completely independent.
Setting the target audio data on the playing time axis according to the matching result of the audio playing time node;
setting the plurality of pictures on the playing time axis according to the matching result of the picture playing time node;
and generating a special effect video according to the setting result of the target audio data and the setting results of the plurality of pictures. Specifically, after target audio data and a plurality of pictures are set on a playing time axis, video editing software is controlled to complete the synthesis operation of the special-effect video so as to output the special-effect video.
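The steps above — resolving time nodes and placing audio and pictures on a shared time axis — could be sketched as an event list. The tuple layout and event-dictionary fields are illustrative assumptions:

```python
def build_timeline(audio_nodes, picture_nodes):
    """Place audio and pictures on a shared playing time axis.

    Each node is (code, start, end) in seconds. This mirrors the matching of
    playing time nodes against coordinate data on the time axis; the actual
    rendering would then be handed to video editing software."""
    events = []
    for code, start, end in audio_nodes:
        events.append({"kind": "audio", "code": code, "start": start, "end": end})
    for code, start, end in picture_nodes:
        events.append({"kind": "picture", "code": code, "start": start, "end": end})
    # Order events by their coordinate on the time axis.
    events.sort(key=lambda e: (e["start"], e["kind"]))
    return events

timeline = build_timeline(
    audio_nodes=[("A001", 0.0, 9.0)],
    picture_nodes=[("P001", 0.0, 3.0), ("P002", 3.0, 6.0), ("P003", 6.0, 9.0)],
)
```

Note that the audio event overlaps all three picture events, consistent with the observation that audio and picture playing periods may overlap or be independent.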
In some embodiments, a fourth identification code of a transparency change rule is also contained in the checkpoint document; generating a special effect video according to the setting result of the target audio data and the setting results of the plurality of pictures, including:
acquiring a transparency change rule according to the fourth identification code; the fourth identification code is a character string recognizable by the server that controls the video editing software, enabling the server to accurately identify the transparency change rule for the current editing process.
And generating a special effect video according to the transparency change rule, the setting result of the target audio data and the setting results of the plurality of pictures.
In this embodiment, applying a transparency change rule when generating the special effect video makes the resulting effect appear more natural.
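A transparency change rule keyed by the fourth identification code might look like the following linear fade sketch; the rule names and the linear shape are illustrative, since the patent leaves the rule's concrete form open:

```python
def apply_transparency_rule(steps, rule="fade_in"):
    """Return per-step alpha values (0.0 = fully transparent, 1.0 = opaque)
    for a picture under a simple transparency change rule."""
    if steps < 2:
        return [1.0]
    if rule == "fade_in":
        return [i / (steps - 1) for i in range(steps)]
    if rule == "fade_out":
        return [1.0 - i / (steps - 1) for i in range(steps)]
    raise ValueError(f"unknown transparency rule: {rule}")

# A hypothetical fourth identification code mapped to a rule name.
rule_table = {"T001": "fade_in", "T002": "fade_out"}
alphas = apply_transparency_rule(steps=5, rule=rule_table["T001"])
# alphas == [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each alpha would then be applied to the corresponding picture frames placed on the playing time axis before the final synthesis.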
To sum up, in the above embodiments, a checkpoint document containing the first identification code of an audio rhythm point, the second identification code of an audio playing time node, the third identification code of a picture playing time node, and the synthesis effect code is obtained according to the special effect type uploaded by the first terminal; target audio data are then obtained according to the first identification code; and the target audio data and the plurality of pictures are synthesized into a special effect video according to the second identification code, the third identification code, and the synthesis effect code. A non-professional video editor can thus synthesize a special effect video simply by uploading a selected special effect type and a plurality of pictures at the first terminal, and complete video editing quickly.
An embodiment of the present invention provides a video composition data processing system corresponding to the method in fig. 1, including:
the receiving module is used for receiving the special effect type and the multiple pictures uploaded by the first terminal;
the first obtaining module is used for obtaining a checkpoint document according to the special effect type, wherein the checkpoint document comprises a first identification code of an audio rhythm point, a second identification code of an audio playing time node, a third identification code of a picture playing time node and a synthesis effect code;
the second acquisition module is used for acquiring target audio data according to the first identification code;
and the synthesis module is used for synthesizing the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code and the synthesis effect code.
All content of the method embodiment of the invention applies to this system embodiment; the system embodiment implements the same functions as the method embodiment and achieves the same beneficial effects.
An embodiment of the present invention provides a video composition data processing apparatus, including:
at least one memory for storing a program;
at least one processor for loading the program to perform the video composition data processing method.
All content of the method embodiment of the invention likewise applies to this apparatus embodiment; the apparatus embodiment implements the same functions as the method embodiment and achieves the same beneficial effects.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, in which processor-executable instructions are stored, and when the processor-executable instructions are executed by a processor, the processor-executable instructions are used to implement the video composition data processing method.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A method for processing video composition data, comprising the steps of:
receiving a special effect type and a plurality of pictures uploaded by a first terminal;
acquiring a checkpoint document according to the special effect type, wherein the checkpoint document comprises a first identification code of an audio rhythm point, a second identification code of an audio playing time node, a third identification code of a picture playing time node and a synthesis effect code;
acquiring target audio data according to the first identification code;
and synthesizing the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code and the synthesis effect code.
2. The method of claim 1, wherein the step of generating the checkpoint document comprises:
acquiring audio data to be processed;
carrying out frame processing on the audio data to be processed;
acquiring a plurality of audio rhythm points in the audio data to be processed after frame processing;
setting first identification codes corresponding to the plurality of audio rhythm points;
and generating a checkpoint document according to the first identification code.
3. The method of claim 2, further comprising, before generating the checkpoint document based on the first identification code, the steps of:
acquiring a preset audio playing time node and a preset picture playing time node;
and setting a second identification code corresponding to the audio playing time node and a third identification code corresponding to the picture playing time node.
4. The method as claimed in claim 2, wherein said setting the first identification codes corresponding to the audio rhythm points comprises:
acquiring target audio rhythm points from the plurality of audio rhythm points;
and setting a first identification code corresponding to the target audio rhythm point.
5. The method according to claim 1, wherein said synthesizing the target audio data and the plurality of pictures into the special effect video according to the second identification code, the third identification code, and the synthesis effect code comprises:
acquiring an audio playing time node according to the second identification code, and identifying a picture playing time node according to the third identification code;
acquiring a playing time axis of the special-effect video;
matching the audio playing time node and the picture playing time node on the playing time axis;
setting the target audio data on the playing time axis according to the matching result of the audio playing time node;
setting the plurality of pictures on the playing time axis according to the matching result of the picture playing time node;
and generating a special effect video according to the setting result of the target audio data and the setting results of the plurality of pictures.
6. The method of claim 5, wherein said checkpoint document further includes a fourth identification code indicating a transparency change rule; generating a special effect video according to the setting result of the target audio data and the setting results of the plurality of pictures, including:
acquiring a transparency change rule according to the fourth identification code;
and generating a special effect video according to the transparency change rule, the setting result of the target audio data and the setting results of the plurality of pictures.
7. The method according to claim 5 or 6, wherein said playing time axis includes a plurality of coordinate data.
8. A video composition data processing system, comprising:
the receiving module is used for receiving the special effect type and the multiple pictures uploaded by the first terminal;
the first obtaining module is used for obtaining a checkpoint document according to the special effect type, wherein the checkpoint document comprises a first identification code of an audio rhythm point, a second identification code of an audio playing time node, a third identification code of a picture playing time node and a synthesis effect code;
the second acquisition module is used for acquiring target audio data according to the first identification code;
and the synthesis module is used for synthesizing the target audio data and the plurality of pictures into a special effect video according to the second identification code, the third identification code and the synthesis effect code.
9. A video composition data processing apparatus, comprising:
at least one memory for storing a program;
at least one processor configured to load the program to perform a method of video composition data processing according to any one of claims 1 to 7.
10. A computer readable storage medium having stored therein processor executable instructions, which when executed by a processor, are for implementing a video composition data processing method as claimed in any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010410286.6A CN111741365B (en) | 2020-05-15 | 2020-05-15 | Video composition data processing method, system, device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111741365A | 2020-10-02 |
CN111741365B | 2021-10-26 |
Family
ID=72647290
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010410286.6A Active CN111741365B (en) | 2020-05-15 | 2020-05-15 | Video composition data processing method, system, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111741365B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114390354A (en) * | 2020-10-21 | 2022-04-22 | 西安诺瓦星云科技股份有限公司 | Program production method, device and system and computer readable storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100080532A1 (en) * | 2008-09-26 | 2010-04-01 | Apple Inc. | Synchronizing Video with Audio Beats |
CN110265057A (en) * | 2019-07-10 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Generate multimedia method and device, electronic equipment, storage medium |
CN110336960A (en) * | 2019-07-17 | 2019-10-15 | 广州酷狗计算机科技有限公司 | Method, apparatus, terminal and the storage medium of Video Composition |
CN110677711A (en) * | 2019-10-17 | 2020-01-10 | 北京字节跳动网络技术有限公司 | Video dubbing method and device, electronic equipment and computer readable medium |
CN110688496A (en) * | 2019-09-26 | 2020-01-14 | 联想(北京)有限公司 | Method and device for processing multimedia file |
CN110933487A (en) * | 2019-12-18 | 2020-03-27 | 北京百度网讯科技有限公司 | Method, device and equipment for generating click video and storage medium |
US20200143839A1 (en) * | 2018-11-02 | 2020-05-07 | Soclip! | Automatic video editing using beat matching detection |
2020-05-15: Application CN202010410286.6A filed; granted as patent CN111741365B (status: Active)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114390354A (en) * | 2020-10-21 | 2022-04-22 | 西安诺瓦星云科技股份有限公司 | Program production method, device and system and computer readable storage medium |
CN114390354B (en) * | 2020-10-21 | 2024-05-10 | 西安诺瓦星云科技股份有限公司 | Program production method, device and system and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107832434B (en) | Method and device for generating multimedia play list based on voice interaction | |
CN111144937B (en) | Advertisement material determining method, device, equipment and storage medium | |
CN107145395B (en) | Method and device for processing task | |
CN114329298B (en) | Page presentation method and device, electronic equipment and storage medium | |
US11954536B2 (en) | Data engine | |
CN109862100B (en) | Method and device for pushing information | |
CN111241496B (en) | Method and device for determining small program feature vector and electronic equipment | |
US20220335977A1 (en) | Method and apparatus for editing object, electronic device and storage medium | |
CN111741365B (en) | Video composition data processing method, system, device and storage medium | |
CN111580808A (en) | Page generation method and device, computer equipment and storage medium | |
CN109116718B (en) | Method and device for setting alarm clock | |
CN112835577A (en) | Data processing method, data processing device, computer equipment and readable storage medium | |
US20220191345A1 (en) | System and method for determining compression rates for images comprising text | |
CN115756692A (en) | Method for automatically combining and displaying pages based on style attributes and related equipment thereof | |
CN114566173A (en) | Audio mixing method, device, equipment and storage medium | |
CN112508284B (en) | Display material pretreatment method, display material throwing system, display material throwing device and display material pretreatment equipment | |
CN111639260B (en) | Content recommendation method, content recommendation device and storage medium | |
CN111784377B (en) | Method and device for generating information | |
CN113641853A (en) | Dynamic cover generation method, device, electronic equipment, medium and program product | |
CN113707179A (en) | Audio identification method, device, equipment and medium | |
CN116385597B (en) | Text mapping method and device | |
CN111460269B (en) | Information pushing method and device | |
CN113535304B (en) | Method and device for inserting, displaying and editing third-party model in design software | |
CN112752098B (en) | Video editing effect verification method and device | |
CN115422906A (en) | Text typesetting method and device, electronic equipment and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
Effective date of registration: 2024-04-02
Patentee after: Guangzhou Mailiang Technology Co., Ltd.
Address after: Room 3103, No. 32 Huaxia Road, Tianhe District, Guangzhou City, Guangdong Province, 510000
Country or region after: China
Patentee before: Guangzhou Xiaomai Network Technology Co., Ltd.
Address before: 3801, 3802, 3803, No. 285, Linhe East Road, Tianhe District, Guangzhou, Guangdong 510000
Country or region before: China