CN114501079A - Method for processing multimedia data and related device

Method for processing multimedia data and related device

Info

Publication number
CN114501079A
CN114501079A
Authority
CN
China
Prior art keywords
video
data processing
option
interface
server
Prior art date
Legal status
Pending
Application number
CN202210112722.0A
Other languages
Chinese (zh)
Inventor
李志�
Current Assignee
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202210112722.0A
Publication of CN114501079A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/40 Transformation of program code
    • G06F8/41 Compilation
    • G06F8/44 Encoding
    • G06F8/447 Target code generation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N21/2387 Stream processing in response to a playback request from an end-user, e.g. for trick-play

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a method and related apparatus for processing multimedia data. The method is applied to a server and includes: receiving a data processing request from a terminal device; providing a corresponding interface to the terminal device based on the data processing request, the interface providing data processing options; receiving data processing option information sent by the terminal device, the data processing option information being obtained by selecting data processing options in the interface; generating an executable file corresponding to the data processing option information; and sending the executable file to the terminal device.

Description

Method for processing multimedia data and related device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and related device for processing multimedia data.
Background
FFmpeg is a set of open-source computer programs that can be used to record and convert digital audio and video, and to convert them into streams.
Processing of audio and video data is typically implemented with FFmpeg. However, during development based on audio/video data, developers must spend considerable effort and time working out suitable video and image processing methods, which reduces development efficiency.
Disclosure of Invention
In view of this, the present disclosure proposes a method and related device for processing multimedia data.
In a first aspect of the present disclosure, a method for processing multimedia data is provided, which is applied to a server, and includes:
receiving a data processing request from a terminal device;
providing a corresponding interface to the terminal device based on the data processing request, wherein data processing options are provided in the interface;
receiving data processing option information sent by the terminal device, wherein the data processing option information is obtained by selecting data processing options in the interface;
generating an executable file corresponding to the data processing option information; and
sending the executable file to the terminal device.
In a second aspect of the present disclosure, a server is provided, including:
one or more processors, memory; and
one or more programs;
wherein the one or more programs are stored in the memory and executed by the one or more processors, the programs comprising instructions for performing the method according to the first aspect.
In a third aspect of the present disclosure, a system for processing multimedia data is provided, including:
the server of the second aspect; and
a terminal device configured to:
sending a data processing request to the server;
receiving and displaying an interface provided by the server;
sending data processing option information generated based on the interface to the server; and
receiving, from the server, an executable file corresponding to the data processing option information.
In a fourth aspect of the disclosure, a non-transitory computer-readable storage medium containing a computer program is provided, which, when executed by one or more processors, causes the processors to perform the method of the first aspect.
In a fifth aspect of the present disclosure, there is provided a computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the method of the first aspect.
According to the method and related device for processing multimedia data, an executable file is generated according to the data processing request and the selected data processing options, which greatly shortens the development cycle of audio and video services.
Drawings
To describe the technical solutions of the present disclosure or the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. The drawings described below are merely embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 illustrates a schematic diagram of an exemplary system provided by an embodiment of the present disclosure.
Fig. 2A illustrates a schematic diagram of an exemplary first interface, in accordance with an embodiment of the present disclosure.
Fig. 2B shows a schematic diagram of an exemplary second interface, in accordance with an embodiment of the present disclosure.
Fig. 3A illustrates a flow diagram of an exemplary process of splicing videos, in accordance with an embodiment of the present disclosure.
Fig. 3B illustrates a schematic diagram of an exemplary target video, in accordance with an embodiment of the present disclosure.
Fig. 3C shows a flow diagram of another exemplary process of stitching video according to an embodiment of the present disclosure.
Fig. 3D shows a flow diagram of an exemplary video processing procedure in accordance with an embodiment of the present disclosure.
Fig. 3E illustrates a schematic diagram of an exemplary target video, in accordance with an embodiment of the present disclosure.
Fig. 3F shows a flow diagram of an exemplary video mirroring process in accordance with an embodiment of the present disclosure.
Fig. 3G shows a schematic diagram of a target video with a literal watermark added according to an embodiment of the disclosure.
Fig. 3H shows a schematic diagram of a picture watermarked target video according to an embodiment of the present disclosure.
Fig. 4 shows a flow diagram of an exemplary method provided by an embodiment of the present disclosure.
Fig. 5 shows a schematic diagram of a hardware structure of an exemplary electronic device provided in this embodiment.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It is to be noted that technical terms or scientific terms used in the embodiments of the present disclosure should have a general meaning as understood by those having ordinary skill in the art to which the present disclosure belongs, unless otherwise defined. The use of "first," "second," and similar terms in the embodiments of the disclosure is not intended to indicate any order, quantity, or importance, but rather to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
FFmpeg is a cross-platform set of open-source libraries for recording and converting digital audio and video and for converting them into streams. It includes a leading audio/video codec library and is compatible with the Windows, Linux, and MacOS operating systems. In terms of protocols, FFmpeg supports HTTP, RTP, RTSP, RTMP, and the like; in terms of audio and video formats, it supports AVI, FLV, MOV, MP4, and the like. FFmpeg provides encapsulation and de-encapsulation of multiple media formats, including multiple audio and video codecs, streaming media over multiple protocols, and conversion between color formats, sampling rates, and bitrates. FFmpeg also provides a rich set of plug-in modules, including encapsulation/de-encapsulation plug-ins and encoding/decoding plug-ins. FFmpeg offers two usage modes, a command line and an SDK, which is convenient for developers with different requirements.
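As an illustration of the command-line mode, the following two common FFmpeg invocations show a container remux and a codec transcode; the file names and codec choices are placeholders, not part of this disclosure:

    # Remux an FLV file into an MP4 container without re-encoding
    ffmpeg -i input.flv -c copy output.mp4

    # Transcode to H.264 video and AAC audio
    ffmpeg -i input.avi -c:v libx264 -c:a aac output.mp4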
However, when developing with FFmpeg, developers need to work out suitable video and image processing methods, which generally consumes considerable effort and time and thus reduces development efficiency.
In view of this, the present disclosure provides a method and related device for processing multimedia data that directly generate an executable file for the user according to the user's data processing request and data processing options, thereby greatly shortening the development cycle.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 provided by an embodiment of the present disclosure.
As shown in fig. 1, the system 100 may be used for processing multimedia data (e.g., video, audio, pictures) and may include a server 200 and a terminal device 300.
The server 200 may be a server deployed inside an enterprise or a business server purchased or rented by the enterprise. There may be one or more servers 200; when there are several, they may form a server cluster using a distributed architecture. In some embodiments, as shown in fig. 1, the system 100 may further include a database server 202, which may be used to store data; the server 200 may call the corresponding data from the database server 202 as needed.
The terminal device 300 may be various types of fixed terminals or mobile terminals. For example, the terminal device 300 may be a computer device such as a personal computer or a notebook computer. The terminal device 300 and the server 200 can communicate through a wired network or a wireless network, thereby realizing data interaction.
In some embodiments, the server 200 may be a server providing an FFmpeg development service, the terminal device 300 may be the operating device of an ordinary audio/video user or of a developer of audio/video software, and the related functions of FFmpeg may be used through the server 200.
Returning to fig. 1, a user or a developer may send a data processing request 302 to the server 200 through the terminal device 300, where the data processing request 302 may be used to request the server 200 to provide a service related to audio and video production.
The terminal device 300 may send the data processing request 302 to the server 200 through a tool such as a browser or software installed on the terminal device 300. For example, the data processing request 302 is sent to the server 200 by entering the web address/IP address corresponding to the server 200 in the browser's address bar. For another example, the data processing request 302 is sent to the server 200 providing the software service by opening the corresponding audio/video production software.
After receiving the data processing request 302, the server 200 may provide the terminal device 300 with a corresponding interface, in which data processing options may be provided for the user to select from.
In some embodiments, the data processing request 302 corresponds to a data processing mode of the terminal device 300, and the server 200 may determine the data processing mode of the terminal device 300 according to the received data processing request 302. The interface provided by the server 200 to the terminal device 300 may differ according to the data processing mode of the terminal device 300.
In some embodiments, the server 200 provides the terminal device 300 with the first interface when the data processing mode of the terminal device 300 is the first processing mode. The first processing mode may be, for example, a lightweight data processing mode, which generates some simple source code according to the data processing options selected by the user (a low-code mode).
Fig. 2A illustrates a schematic diagram of an exemplary first interface 204, in accordance with an embodiment of the present disclosure.
As shown in fig. 2A, data processing options are provided in the first interface 204, wherein the data processing options may further include video processing operation options and picture processing operation options.
Video processing operation options may include, for example, merging videos, cutting videos, video watermarking, video de-watermarking, video plus audio, video screenshots, modifying video bitrate, modifying video color space, video-to-GIF, video key frames, video scaling up/down, and so forth. In the first interface 204, a "more" button is arranged at the bottom of the video processing operation option window, and after the user clicks the button, more video processing operation options can be expanded in the first interface 204 for the user to select.
Picture processing operation options may include, for example, merging pictures, picture-to-video, picture watermarking, picture compression, multi-picture to GIF, and so forth. In addition, in the first interface 204, a "more" button may also be disposed at the bottom of the picture processing operation option window, and after the user clicks the button, more picture processing operation options may be expanded in the first interface 204 for the user to select.
In some embodiments, a programming language option may also be included in the first interface 204 for the user to select a programming language, and the corresponding source code may be generated based on the selected programming language. In this way, the user can have the system 100 generate source code written in the desired programming language according to actual requirements, better meeting the user's customized needs. As shown in fig. 2A, the programming language options in the first interface 204 provide several basic options, which may be common choices in general scenarios, e.g., Java, Go, C, and Python. In some embodiments, these basic options may be replaced, reordered, and so on according to the user's frequency of use (e.g., source code was generated with a programming language option more than 10 times in the last 7 days), so as to better match the user's habits. In addition, in the first interface 204, a "More" button may be disposed on one side of the programming language selection window; after the user clicks it, more programming language options are expanded in the first interface 204 for selection. In some embodiments, considering the generality of the Java language, the programming language option may default to "Java", which the user can adjust if a different programming language is needed.
As shown in fig. 2A, in some embodiments, the first interface 204 may further include a source code presentation window 2042, where the source code presentation window 2042 is used to present source code written in a programming language corresponding to the programming language option selected by the user.
In some embodiments, the server 200 provides the terminal device 300 with the second interface when the data processing mode of the terminal device 300 is the second processing mode. The second processing mode may be, for example, a heavyweight data processing mode, and may generate a Software Development Kit (SDK) for invoking a corresponding micro-service API interface according to a data processing option selected by a user.
Fig. 2B shows a schematic diagram of an exemplary second interface 206, in accordance with an embodiment of the present disclosure.
As shown in fig. 2B, data processing options are provided in the second interface 206, wherein the data processing options may further include a video processing interface option and a picture processing interface option.
Video processing interface options may include, for example, obtaining all video material, obtaining basic video information, stitching videos in the spatial dimension, stitching videos in the temporal dimension, making picture-in-picture video, cropping video at a specified resolution, cropping video to a specified duration, modifying video bitrate, modifying video color space, capturing video images, scaling video up/down, obtaining the first frame of a video, obtaining the last frame of a video, grid-style video stitching, and so forth. It is understood that fig. 2B merely illustrates some exemplary video processing interface options, which may be added, removed, or modified according to actual needs.
Picture processing interface options may include, for example, obtaining basic picture information, picture splicing, single picture to video, picture sequence to GIF, obtaining picture thumbnails, picture compression, picture sequence to video, picture watermarking, and so forth. It is understood that fig. 2B merely illustrates some exemplary picture processing interface options, which may be added, removed, or modified according to actual needs.
The terminal device 300 then presents the user with the selectable data processing options by displaying the interface 204 or 206. By selecting data processing options in the interface 204 or 206, the user can have the terminal device 300 generate a message carrying the corresponding data processing option information 304 and send it to the server 200. For example, after selecting the desired data processing options, the user clicks a "Submit" button in the interface 204 or 206, so that the terminal device 300 generates a message carrying the corresponding data processing option information 304 based on the selected options and sends it to the server 200. In some embodiments, as shown in figs. 2A and 2B, a "Reset" button may also be provided in the interfaces 204 and 206; after clicking it, the user can reset the selected options and start the selection over.
The server 200, after receiving the data processing option information transmitted by the terminal device 300, may generate an executable file 208 corresponding to the data processing option information based on the data processing option information and then transmit the executable file 208 to the terminal device 300.
The executable file 208 generated by the server 200 may be different according to a data processing mode of the terminal device 300.
In some embodiments, when the data processing mode of the terminal device 300 is the first processing mode, the server 200 may generate a source code file corresponding to the data processing option information 304. In some embodiments, when the data processing option information 304 includes programming language option information, the server 200 may generate a source code file corresponding to the data processing option information 304 based on a programming language corresponding to the programming language option. For example, if the user selects a Java option, the generated source code file is written in the Java language.
After receiving the source code file, the terminal device 300 may display the corresponding source code in the source code display window 2042. In some embodiments, the user may also click a "one-click copy" button in the interface 204 to have the terminal device 300 copy the source code, which the user can then paste into a program file of the project under development. In other embodiments, the user may click an "export file" button in the interface 204, so that the terminal device 300 generates a corresponding executable file based on the source code, for example a .java file, which the user can place directly in the project to invoke the corresponding method.
It is understood that, in some embodiments, the server 200 may store the source code written in each programming language corresponding to the data processing option in the database server 202 in advance, match the source code stored in the database server 202 based on the data processing option information 304, and then return the source code obtained by matching to the terminal device 300. It will be appreciated that a piece of source code may correspond to one data processing option, and that when a user selects a plurality of data processing options, the generated source code may be multiple pieces and may be formed sequentially in a source code file.
In other embodiments, when the data processing mode of the terminal device 300 is the second processing mode, the server 200 may generate a software development kit corresponding to the data processing option information 304. Specifically, the server 200 may determine the API interface required to be called by the corresponding data processing option according to the data processing option information 304, and then generate the corresponding Software Development Kit (SDK) based on the API interface required to be called by the data processing option.
In some embodiments, the API interface may be a callable interface provided by the micro service architecture, and different API interfaces may implement different data processing functions, so that selecting different API interfaces may correspondingly call the processing function of the multimedia data provided by the corresponding micro service. The microservice architecture may be integrated with the server 200 to form a cluster of servers that provide microservices. When generating the SDK, the server 200 may generate a corresponding SDK according to the API interface of the micro service to be called. It is understood that the SDKs may correspond to data processing options one to one, or one SDK may correspond to a plurality of data processing options.
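As a purely hypothetical sketch of how SDK code might invoke such a microservice API over HTTP, the following request could carry the merging parameters; the host, endpoint path, and parameter names are illustrative assumptions, not interfaces defined by this disclosure:

    # Hypothetical call to a video-merging microservice; all names are illustrative
    curl -X POST http://server.example.com/api/video/merge \
      -H "Content-Type: application/json" \
      -d '{"canvas":"1280x720","rows":2,"cols":2,"inputs":["a.mp4","b.mp4","c.mp4","d.mp4"]}'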
After the terminal device 300 receives the SDK, the user only needs to import the SDK into the item, and can call the corresponding method according to the interface calling mode when necessary.
It can be seen that the executable file (e.g., source code or SDK) generated by the embodiment of the present disclosure can be directly added to the product (or software) under development by the user, so that the product (or software) has the multimedia data processing function that can be realized by the corresponding executable file, and the user does not need to write the executable file by himself. Particularly, when the user uses the FFmpeg to realize the multimedia data processing function, the user does not need to spend excessive energy on learning the underlying technology of the FFmpeg, thereby greatly reducing the development difficulty and improving the development efficiency.
In some embodiments, the data processing options may include a first video merge option for stitching multiple videos in the spatial dimension, e.g., stitching them in a multi-grid layout. When running the SDK corresponding to the first video merge option, the terminal device 300 may send the corresponding first parameter to the server 200. It can be understood that the SDK corresponding to the first video merge option may be integrated into audio/video production software, and the SDK is run when the function corresponding to the first video merge option is selected in that software. After the SDK is run, the corresponding API interface in the microservice architecture of the server 200 is called, passing in the corresponding first parameter so that the microservice processes data based on it. Accordingly, the corresponding API interface of the server 200 may receive the first parameter and then perform the video merging processing based on it. The first parameter may include the canvas size, the number of rows and columns of the spliced video, the resolution of each video file/stream, the specifically selected video files/streams, and similar parameters.
When performing video merging processing, a corresponding micro service in the micro service architecture of the server 200 may first create a blank overlay canvas (overlay canvas); for example, with the nullsrc of FFmpeg, a blank canvas of a specified size is created according to the canvas size in the first parameter.
Then, the corresponding positions of the pictures of the video files on the overlay canvas can be determined according to the number of rows and columns of the spliced video in the first parameter. For example, when the numbers of rows and columns are both 2, the videos are spliced in a four-grid layout, and the corresponding positions of the video files may be upper left (upperleft), lower left (lowerleft), upper right (upperright), and lower right (lowerright). The user may also specify in the first parameter which grid cell each video file/stream occupies; this position information is then carried in the first parameter and known to the server 200.
Then, according to the resolution of each video stream and the specifically selected video files/streams in the first parameter, the pictures of the video files are superimposed at the corresponding positions on the overlay canvas to obtain the target video. For example, the overlay canvas is used as a reference channel named [base]; each input video file or stream is taken from the four general channels [0:v], [1:v], [2:v], and [3:v] and re-assigned to the position channels [upperleft], [upperright], [lowerleft], and [lowerright], whose positions and resolutions are configured according to the values specified in the first parameter. The video files/streams in each channel are adapted to that channel's configuration. The four input video files/streams are finally tiled in turn onto the empty overlay canvas generated by nullsrc.
With reference to fig. 3A, the channel accumulation process is illustrated as follows:
①[base][upperleft]——>[tmp1]
②[tmp1][upperright]——>[tmp2]
③[tmp2][lowerleft]——>[tmp3]
④[tmp3][lowerright]——>[result]
Based on the above principle, after the API interface receives the first parameter, including the canvas size, the number of rows and columns of the spliced video, the resolution of each video file/stream, and the specifically selected video files/streams, the server 200 can complete the splicing according to this splicing principle and obtain the target video, as shown in fig. 3B.
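A minimal command-line sketch of this overlay chain, assuming four inputs scaled to 640:360 tiles on a 1280x720 canvas (file names, sizes, and the H.264 encoder choice are illustrative):

    ffmpeg -i a.mp4 -i b.mp4 -i c.mp4 -i d.mp4 -filter_complex \
      "nullsrc=size=1280x720 [base]; \
       [0:v] setpts=PTS-STARTPTS, scale=640:360 [upperleft]; \
       [1:v] setpts=PTS-STARTPTS, scale=640:360 [upperright]; \
       [2:v] setpts=PTS-STARTPTS, scale=640:360 [lowerleft]; \
       [3:v] setpts=PTS-STARTPTS, scale=640:360 [lowerright]; \
       [base][upperleft] overlay=shortest=1 [tmp1]; \
       [tmp1][upperright] overlay=shortest=1:x=640 [tmp2]; \
       [tmp2][lowerleft] overlay=shortest=1:y=360 [tmp3]; \
       [tmp3][lowerright] overlay=shortest=1:x=640:y=360" \
      -c:v libx264 output.mp4

Each overlay step consumes the intermediate [tmpN] label produced by the previous step, mirroring the channel accumulation ①-④ above.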
Then, the server 200 may transmit the target video to the terminal device 300, and the terminal device 300 may display the target video with the display effect as shown in fig. 3B.
In a traditional video surveillance system, a supervisor needs to watch many camera feeds and finds it difficult to relate the scattered feeds to their actual geographic positions; as a result, a large scene cannot be monitored globally in real time, historical events cannot be traced back and searched quickly, and the massive, scattered surveillance video resources are hard to view and interpret. Therefore, in some embodiments, an optional application scenario of the video stitching processing is the monitoring scenario of a city command center system, where stitched video fuses the multi-angle pictures captured by multiple surveillance cameras into one picture, facilitating global monitoring by the operators.
In some embodiments, the data processing option may include a second video merge option for temporally dimensional stitching of the plurality of videos, e.g., temporally sequential stitching of the plurality of videos in the temporal dimension. When the terminal device 300 runs the SDK corresponding to the second video merging option, the corresponding second parameter may be sent to the server 200, and the manner of sending the second parameter into the server 200 is similar to that of sending the first parameter, which is not described herein again. The second parameter may include parameters such as the specific selected video file/video stream and its splicing order.
To be compatible with the various video formats, before splicing the videos the corresponding microservice in the microservice architecture of the server 200 may convert the video files/streams specifically selected by the user in the second parameter into files of a predetermined format. For example, video files/streams may be packaged into a TS container. TS is short for MPEG2-TS, a format characterized in that the stream can be decoded independently starting from any segment.
Then, the files in the predetermined format are merged in the splicing order given in the second parameter to obtain the target video. For example, the concat functionality provided by FFmpeg may be used to merge the files in splicing order and output the result to a specified file, as shown in fig. 3C.
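A sketch of this two-step flow using TS intermediates and FFmpeg's concat protocol (one of several concatenation mechanisms FFmpeg offers), assuming H.264/AAC sources; file names are illustrative:

    # Step 1: remux each source into an MPEG2-TS intermediate without re-encoding
    ffmpeg -i part1.mp4 -c copy -bsf:v h264_mp4toannexb part1.ts
    ffmpeg -i part2.mp4 -c copy -bsf:v h264_mp4toannexb part2.ts

    # Step 2: join the TS segments in splicing order and write the target file
    ffmpeg -i "concat:part1.ts|part2.ts" -c copy -bsf:a aac_adtstoasc output.mp4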
Then, the server 200 may transmit the target video to the terminal device 300, and the terminal device 300 may display the target video.
Through time-dimension video splicing, multiple videos are joined into one in sequence, which is convenient for users to use and store. For example, at an event a user may shoot several videos with a camera, mobile phone, or other device; the videos are all shot at the event and share the same theme, and merging them into one video through time-dimension splicing makes them easier to share and store.
In some embodiments, the data processing options may include a picture-in-picture option for processing multiple videos into picture-in-picture video. When the terminal device 300 runs the SDK corresponding to the pip option, a corresponding third parameter may be sent to the server 200, and the manner of sending the third parameter into the server 200 is similar to that of the first parameter, which is not described herein again. The third parameter may include parameters such as the position and size of the specifically selected video file/video stream and the foreground video or background video to which the video file/video stream belongs.
According to the video files/streams specifically selected by the user in the third parameter and whether each belongs to the foreground or the background, the corresponding microservice in the microservice architecture of the server 200 may first create a background layer (overlay1) for the background video file and foreground layers (overlay_1 to overlay_n) for the foreground video files.
Then, the background video file and the foreground video files are bound to the background layer (overlay1) and the foreground layers (overlay_1 to overlay_n), respectively, to obtain the target video. Each layer receives its corresponding video file, and the overlay of the background video file also receives the layers of the foreground video files. For example, the binding may use the video overlay capability of the overlay filter provided by FFmpeg, and the position and size of each foreground video may be set according to the corresponding values in the third parameter. The picture-in-picture processing is shown in fig. 3D, and the resulting target video is shown in fig. 3E.
It is understood that the media received by an overlay layer can be video files, picture files, or video streams, compatible with common video formats such as mp4, flv, or avi and picture formats such as gif, png, or bmp. Therefore, both the background and the foreground of the resulting picture-in-picture target video can be videos or pictures, and the synthesized target video can be diversified.
Then, the server 200 may transmit the target video to the terminal device 300, and the terminal device 300 may display the target video with the display effect as shown in fig. 3E.
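A minimal overlay-filter sketch of such picture-in-picture synthesis, assuming one background and one foreground input; the foreground size (320:180) and position (20,20) stand in for the values carried in the third parameter:

    # Scale the foreground and overlay it near the top-left corner of the background
    ffmpeg -i background.mp4 -i foreground.mp4 -filter_complex \
      "[1:v] scale=320:180 [pip]; [0:v][pip] overlay=x=20:y=20" \
      output.mp4

Additional foreground layers (overlay_2 to overlay_n) would chain further overlay steps in the same way.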
In some embodiments, an optional application scenario for picture-in-picture processing may be a surveillance scenario for a city command center system, such that picture-in-picture video may be utilized to provide multi-angle viewing. For example, in the background video, there may be a monitoring picture of an angle of a main landmark position of a city (e.g., an intersection of a busy street), and in the foreground video 1 and the foreground video 2, there may be monitoring pictures of the main landmark position acquired from other two angles. As an optional embodiment, the foreground video 1 and the foreground video 2 may be enlarged monitoring pictures for a certain target object (e.g., an object, a pedestrian) in the background video, so that the enlarged pictures for observing the target object from two angles can be provided in the foreground video, and further, a multi-angle observation picture for the target object is provided, thereby enriching the monitoring function and improving the monitoring efficiency.
In some embodiments, an optional application scene of picture-in-picture processing may also be a live game scene, so that picture-in-picture video may be utilized to provide a multi-angle view of the game. Taking a football game as an example, in the background video, there may be a panoramic picture of the entire court, and in the foreground video 1 and the foreground video 2, there may be pictures of the court taken from two other angles. As an alternative embodiment, the foreground video 1 and the foreground video 2 may be enlarged pictures of a certain player in the background video, which are observed from two angles, so that a multi-angle observation picture for the player can be provided in the picture-in-picture video, and the game watching effect and the user experience are improved.
In some embodiments, the data processing options may include a video mirroring option for processing the video as a mirrored video. When the terminal device 300 runs the SDK corresponding to the video mirroring option, a corresponding fourth parameter may be sent to the server 200, and a manner of sending the fourth parameter into the server 200 is similar to that of the first parameter, which is not described herein again. The fourth parameter may include parameters of a specifically selected video file/video stream, a resolution and a starting point of cropping, a direction of mirroring (e.g., horizontal mirroring or vertical mirroring, a rotation angle, etc.), and the like.
Fig. 3F shows a schematic diagram of an exemplary video mirroring process according to an embodiment of the present disclosure. As shown in fig. 3F, the corresponding microservice in the microservice architecture of the server 200 may split the video file/stream specifically selected by the user in the fourth parameter into two identical streams, a first video stream and a second video stream. The first video stream is kept as-is, while the second video stream undergoes the mirroring processing. The split into two identical streams may be performed, for example, by means of FFmpeg's AVFilter module.
Then, the second video stream is cropped according to the cropping resolution and starting point in the fourth parameter. For example, a crop operation is performed on the second video stream using the crop filter provided by FFmpeg to obtain the cropped second video stream. The crop operation takes four parameters, w:h:x:y, where w and h denote the resolution of the cropped video and x and y denote the starting point of the crop relative to the origin of the original video.
Then, the cropped second video stream is mirrored according to the mirroring direction in the fourth parameter to obtain the mirrored second video stream. The FFmpeg filter used depends on the mirroring direction: the vflip filter flips the video vertically, the hflip filter flips it horizontally, and the rotate filter rotates it by a specified angle.
Finally, the mirrored second video stream is merged with the first video stream to obtain the target video, with the layer corresponding to the mirrored second video stream located on the top layer of the target video. For example, the second video stream is merged back onto the original overlay layer and displayed uppermost, and the target video is output.
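A sketch of the split/crop/mirror/overlay chain for horizontal mirroring; the crop rectangle (a centred 640x360 region of a 1280x720 input) and the overlay position are illustrative stand-ins for the fourth parameter:

    # Duplicate the input, crop and horizontally flip the copy, then overlay it on top
    ffmpeg -i input.mp4 -filter_complex \
      "[0:v] split [first][second]; \
       [second] crop=640:360:320:180, hflip [mirrored]; \
       [first][mirrored] overlay=x=320:y=180" \
      output.mp4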
Then, the server 200 may transmit the target video to the terminal device 300, and the terminal device 300 may display the target video.
Mirroring a video, and in particular mirroring only a selected part of it, can enrich the display effect. For example, the central area of the video can be mirrored while the edge area is kept as-is, so that the central area and the edge area form a mirror contrast and the processed video gains a special effect, improving its interest and watchability.
In some embodiments, the data processing options may include a watermarking option for adding a watermark to the video; the watermark may be text or a picture, used to mark the video or add a special effect. When running the SDK corresponding to the watermarking option, the terminal device 300 may send a corresponding fifth parameter to the server 200; the manner of sending the fifth parameter to the server 200 is similar to that of the first parameter and is not repeated here. The fifth parameter may differ for different types of watermarks. For example, for a text watermark, the fifth parameter may include the specifically selected video file/stream, the watermark text, the font format, font color, font size, text background color, and the position of the text watermark relative to the video origin. For a picture watermark, the fifth parameter may include the specifically selected video file/stream, the watermark picture, the width and height of the watermark, and the position of the picture watermark relative to the video origin.
The corresponding microservice in the microservice architecture of the server 200 may add a watermark at the corresponding position of the video file/stream specifically selected by the user in the fifth parameter to obtain the target video; fig. 3G and fig. 3H respectively show target videos with a text watermark and a picture watermark added according to embodiments of the present disclosure. Text watermarking can be implemented with the drawtext filter provided by FFmpeg. Picture watermarking can be implemented with a filter chain formed by FFmpeg's movie and overlay filters: the movie filter specifies the watermark picture and its width and height according to the fifth parameter, and the overlay filter sets the position of the watermark relative to the video origin according to the fifth parameter.
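Two command-line sketches of these filter chains; the text, font settings, logo file, and offsets are illustrative, and the drawtext filter assumes an FFmpeg build with libfreetype enabled:

    # Text watermark with drawtext
    ffmpeg -i input.mp4 -vf "drawtext=text='sample watermark':fontcolor=white:fontsize=24:x=10:y=10" output.mp4

    # Picture watermark with the movie + overlay filter chain
    ffmpeg -i input.mp4 -vf "movie=logo.png [wm]; [in][wm] overlay=10:10 [out]" output.mp4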
Adding a text or picture watermark to a video marks the source of the video and helps protect intellectual property.
It should be noted that some embodiments are described taking a video file as an example. It is understood that the embodiments of the present disclosure can also be applied to other multimedia files, for example, picture files. Based on the same idea, the embodiments may also process picture files, or process video files and picture files together.
It should be further noted that the foregoing embodiments of running the SDK and performing data processing describe adding the corresponding SDK to a terminal product. As the first processing mode proposed in the embodiments of the present disclosure shows, when the corresponding source code is added directly to the terminal product, the terminal device 300 itself can perform the corresponding data processing without calling the API interface of a microservice provided by the server; such embodiments, in which the terminal device performs the data processing operations, also fall within the scope of the present disclosure.
It can be seen that, by providing two data processing modes, the embodiment of the present disclosure may allow a user to select a corresponding implementation mode according to the data processing requirement.
If processing of video pictures is not a core requirement of the user's service, such processing occurs only under some specific requirements and involves little complex processing logic; the user can then use the first processing mode (lightweight) to obtain the required executable source code.
If video picture processing is frequently required in the user's service and involves more processing details, the user may choose the second processing mode (heavyweight) and implement the corresponding functions by calling the API interfaces. In some embodiments, through the more comprehensive video/picture processing interfaces provided by the microservice architecture, the interfaces for the corresponding functions can be freely combined and used in the project's business.
By analyzing the audio and video processing technology of the FFmpeg open-source library and combining it with actual services, the system 100 for processing multimedia data provided by the embodiments of the present disclosure integrates FFmpeg's individual function points, is compatible with different programming languages and system environments, and provides an interactive interface with a reasonable layout.
Through a simple and easy-to-use interactive mode, the system 100 for processing multimedia data provided by the embodiments of the present disclosure offers developers executable programs supporting multiple programming languages and provides video/image processing microservices. The solution provided by the embodiments of the present disclosure avoids spending a great deal of effort searching for material due to a lack of professional video/image processing knowledge, while greatly improving development efficiency and reducing development cost.
The embodiments of the disclosure also provide a method for processing multimedia data. Fig. 4 illustrates a flow diagram of an exemplary method 400 provided by an embodiment of the present disclosure. The method 400 may be applied to the server 200 of fig. 1. As shown in fig. 4, the method 400 may include the following steps.
At step 402, server 200 may receive a data processing request (e.g., request 302 of fig. 1) of a terminal device (e.g., terminal device 300 of fig. 1).
In step 404, the server 200 may provide a corresponding interface to the terminal device based on the data processing request, where the interface provides data processing options.
In some embodiments, the data processing request corresponds to a data processing mode of the terminal device; based on the data processing request, providing a corresponding interface to the terminal device, including: determining the data processing mode of the terminal equipment based on the data processing request; in response to determining that the data processing mode of the terminal device is a first processing mode, providing a first interface (e.g., interface 204 of FIG. 2A) to the terminal device; or in response to determining that the data processing mode of the terminal device is a second processing mode, providing a second interface (e.g., interface 206 of fig. 2B) to the terminal device.
In some embodiments, the first interface further includes a source code presentation window (e.g., window 2042 of fig. 2A) for presenting source code written in a programming language corresponding to the programming language option.
In step 406, the server 200 may receive data processing option information (e.g., option information 304 of fig. 1) sent by the terminal device, where the data processing option information is obtained by selecting a data processing option in the interface.
At step 408, server 200 may generate an executable file (e.g., file 208 of FIG. 1) corresponding to the data processing option information.
In some embodiments, the data processing options provided in the first interface include programming language options, generating a source code file corresponding to the data processing option information, including: and generating a source code file corresponding to the data processing option information based on the programming language corresponding to the programming language option.
In some embodiments, generating a software development kit corresponding to the data processing option information comprises: determining an interface which needs to be called by a data processing option in the data processing option information; and generating the software development kit based on the interface required to be called by the data processing option.
At step 410, server 200 may send the executable file to the terminal device.
In some embodiments, generating an executable file corresponding to the data processing option information comprises: responding to the data processing mode of the terminal equipment as a first processing mode, and generating a source code file corresponding to the data processing option information; or responding to the data processing mode of the terminal equipment as a second processing mode, and generating a software development kit corresponding to the data processing option information.
In some embodiments, the data processing option comprises a first video merge option; the method further comprises the following steps: receiving a first parameter sent by the terminal device through running a software development kit corresponding to the first video merging option; creating a blank overlay canvas according to the first parameter; determining corresponding positions of pictures of a plurality of video files on the covering canvas according to the first parameters; correspondingly overlapping the pictures of the plurality of video files at corresponding positions on the covering canvas according to the first parameters to obtain a target video; and sending the target video to the terminal equipment.
In some embodiments, the data processing option comprises a second video merge option; the method further comprises the following steps: receiving a second parameter sent by the terminal device through running a software development kit corresponding to the second video merging option; processing the plurality of video files into files with preset formats respectively according to the second parameters; according to the second parameter, combining the files with the preset format in sequence to obtain a target video; and sending the target video to the terminal equipment.
In some embodiments, the data processing options include a picture-in-picture option; the method further comprises the following steps: receiving a third parameter sent by the terminal equipment by running a software development kit corresponding to the picture-in-picture option; respectively creating a background layer and a foreground layer for the background video file and the foreground video file according to the third parameter; the background video file and the foreground video file are respectively and correspondingly bound with the background layer and the foreground layer to obtain a target video; and sending the target video to the terminal equipment.
In some embodiments, the data processing options include a video mirroring option; the method further comprises the following steps: receiving a fourth parameter sent by the terminal device by running a software development kit corresponding to the video mirroring option; splitting the video file into a first video stream and a second video stream that are identical, according to the fourth parameter; cropping the second video stream according to the fourth parameter; mirroring the cropped second video stream according to the fourth parameter; merging the mirrored second video stream with the first video stream to obtain a target video, wherein a layer corresponding to the mirrored second video stream is located at a top layer of the target video; and sending the target video to the terminal device.
In some embodiments, the data processing options include a watermarking option; the method further comprises the following steps: receiving a fifth parameter sent by the terminal device by running a software development kit corresponding to the watermarking option; adding a watermark at a corresponding position of the video file according to the fifth parameter to obtain a target video; and sending the target video to the terminal equipment.
It should be noted that the method of the embodiments of the present disclosure may be executed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene and completed by the mutual cooperation of a plurality of devices. In such a distributed scenario, one of the devices may only perform one or more steps of the method of the embodiments of the present disclosure, and the devices may interact with each other to complete the method.
It should be noted that the above describes some embodiments of the disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Based on the same inventive concept, corresponding to any of the above-mentioned embodiments of the method 400, the present disclosure further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the program, the method 400 according to any of the above embodiments is implemented.
Fig. 5 shows a more specific hardware structure diagram of the electronic device 500 provided in this embodiment. The device 500 may be used to implement the server 200 of fig. 1, and in some embodiments, the terminal device 300 may also have the structure of the device 500. As shown in fig. 5, the apparatus 500 may include: a processor 502, a memory 504, an input/output interface 506, a communication interface 508, and a bus 510. Wherein the processor 502, memory 504, input/output interface 506, and communication interface 508 are communicatively coupled to each other within the device via bus 510.
The processor 502 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solutions provided in the embodiments of the present specification.
The Memory 504 may be implemented in the form of a ROM (Read Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 504 may store an operating system and other application programs, and when the technical solution provided by the embodiments of the present specification is implemented by software or firmware, the relevant program codes are stored in the memory 504 and called to be executed by the processor 502.
The input/output interface 506 is used for connecting an input/output module to realize information input and output. The input/output module may be configured as a component within the device (not shown in the figure) or may be external to the device to provide the corresponding functions. Input devices may include a keyboard, a mouse, a touch screen, a microphone, and various sensors; output devices may include a display, a speaker, a vibrator, an indicator light, and the like.
The communication interface 508 is used for connecting a communication module (not shown in the figure) to realize communication interaction between the device and other devices. The communication module may communicate in a wired manner (such as USB or a network cable) or in a wireless manner (such as a mobile network, Wi-Fi, or Bluetooth).
Bus 510 includes a path that transfers information between the various components of the device, such as processor 502, memory 504, input/output interface 506, and communication interface 508.
It should be noted that although the above device shows only the processor 502, the memory 504, the input/output interface 506, the communication interface 508, and the bus 510, in a specific implementation the device may also include other components necessary for normal operation. In addition, those skilled in the art will appreciate that the above device may include only the components necessary to implement the embodiments of the present specification, and need not include all of the components shown in the figure.
The electronic device of the above embodiment is used to implement the method 400 corresponding to any one of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, and corresponding to any of the above-described method embodiments, the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method 400 according to any of the above embodiments.
Computer-readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The computer instructions stored in the storage medium of the above embodiment are used to enable the computer to execute the method 400 according to any embodiment, and have the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, and corresponding to the method 400 of any of the above embodiments, the present disclosure also provides a computer program product comprising a computer program. In some embodiments, the computer program is executable by one or more processors to cause the processors to perform the method 400. For each step of the method 400, the processor executing that step may belong to the execution subject described for that step in the method embodiments.
The computer program product of the above embodiment is used to enable a processor to execute the method 400 according to any of the above embodiments, and has the advantages of the corresponding method embodiments, which are not described herein again.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure, including the claims, is limited to these examples. Within the idea of the present disclosure, technical features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the embodiments exist; for brevity, they are not provided in detail.
In addition, well-known power and ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided figures, for simplicity of illustration and discussion and so as not to obscure the embodiments of the disclosure. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the embodiments, and also in view of the fact that specifics of the implementation of such block diagram devices are highly dependent on the platform on which the embodiments are to be implemented (i.e., such specifics should be well within the purview of one skilled in the art). Where specific details (e.g., circuits) are set forth to describe example embodiments, it should be apparent to one skilled in the art that the embodiments can be practiced without these specific details or with variations of them. Accordingly, the description is to be regarded as illustrative rather than restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
The disclosed embodiments are intended to embrace all such alternatives, modifications, and variations that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalents, improvements, and the like made within the spirit and principles of the embodiments of the disclosure are intended to be included within the scope of the disclosure.

Claims (16)

1. A method for processing multimedia data, applied to a server, comprising:
receiving a data processing request from a terminal device;
providing a corresponding interface to the terminal device based on the data processing request, wherein data processing options are provided in the interface;
receiving data processing option information sent by the terminal device, wherein the data processing option information is obtained by selecting data processing options in the interface;
generating an executable file corresponding to the data processing option information; and
sending the executable file to the terminal device.
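For orientation only, the request/interface/option/executable flow of claim 1 could be served over HTTP. The sketch below uses Flask and invents the endpoint names, the option catalogue, and a stub "SDK" packaging; none of these concrete choices come from the disclosure.

```python
import io
import zipfile

from flask import Flask, jsonify, request, send_file

app = Flask(__name__)

# Hypothetical option catalogue; the claim does not fix concrete names.
OPTIONS = ["video_merge", "concat", "picture_in_picture", "mirror", "watermark"]

@app.post("/processing-request")
def provide_interface():
    # Steps 1-2 of claim 1: answer the data processing request with an
    # "interface" listing the available data processing options.
    return jsonify({"options": OPTIONS})

@app.post("/option-info")
def send_executable():
    # Steps 3-5: receive the selected options and return a generated
    # "executable file" (here a zip of Python stubs standing in for an SDK).
    selected = [o for o in request.get_json().get("options", []) if o in OPTIONS]
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for opt in selected:
            zf.writestr(f"{opt}.py", f"def {opt}(*args, **kwargs):\n    ...\n")
    buf.seek(0)
    return send_file(buf, download_name="sdk.zip", as_attachment=True)
```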
2. The method of claim 1, wherein the data processing request corresponds to a data processing mode of the terminal device, and providing a corresponding interface to the terminal device based on the data processing request comprises:
determining the data processing mode of the terminal device based on the data processing request;
providing a first interface to the terminal device in response to determining that the data processing mode of the terminal device is a first processing mode; or
providing a second interface to the terminal device in response to determining that the data processing mode of the terminal device is a second processing mode.
3. The method of claim 2, wherein generating an executable file corresponding to the data processing option information comprises:
generating a source code file corresponding to the data processing option information in response to the data processing mode of the terminal device being the first processing mode; or
generating a software development kit corresponding to the data processing option information in response to the data processing mode of the terminal device being the second processing mode.
4. The method of claim 3, wherein the data processing options provided in the first interface include a programming language option, and generating a source code file corresponding to the data processing option information comprises:
generating the source code file based on the programming language corresponding to the programming language option.
5. The method of claim 4, wherein the first interface further comprises a source code presentation window for presenting source code written in the programming language corresponding to the programming language option.
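As a toy illustration of the per-language generation in claims 4 and 5, a server might keep one source template per supported language and instantiate it for each selected option. The template contents and language keys below are invented for the example.

```python
# Hypothetical per-language source templates; the claims do not fix them.
TEMPLATES = {
    "python": "def {name}(*args, **kwargs):\n    raise NotImplementedError\n",
    "javascript": "export function {name}(...args) {{ throw new Error('todo'); }}\n",
}

def generate_source_file(option_name: str, language: str) -> str:
    """Render the source stub for one data processing option in the
    programming language chosen in the first interface."""
    return TEMPLATES[language].format(name=option_name)

# The rendered text could also feed the source code presentation window.
print(generate_source_file("video_merge", "python"))
```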
6. The method of claim 3, wherein generating a software development kit corresponding to the data processing option information comprises:
determining an interface that needs to be called by each data processing option in the data processing option information; and
generating the software development kit based on the interfaces that need to be called by the data processing options.
7. The method of claim 3, wherein the data processing options comprise a first video merge option; the method further comprises:
receiving a first parameter sent by the terminal device through running the software development kit corresponding to the first video merge option;
creating a blank overlay canvas according to the first parameter;
determining corresponding positions of the pictures of a plurality of video files on the overlay canvas according to the first parameter;
superimposing the pictures of the plurality of video files at the corresponding positions on the overlay canvas according to the first parameter to obtain a target video; and
sending the target video to the terminal device.
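To make the canvas construction of claim 7 concrete: with FFmpeg, a blank canvas can come from the color source, and each video can be scaled and overlaid at its position. The canvas size, placement tuples, helper name, and termination policy below are assumptions for the sketch.

```python
import subprocess

def merge_on_canvas(placements, dst, canvas="1280x720"):
    """placements: list of (path, x, y, w, h) giving where each video's
    picture lands on a blank overlay canvas."""
    inputs, chain = [], [f"color=c=black:s={canvas}[c0]"]  # blank canvas
    for i, (path, x, y, w, h) in enumerate(placements):
        inputs += ["-i", path]
        chain.append(f"[{i}:v]scale={w}:{h}[v{i}]")
        # end the (infinite) canvas with the first video; keep later
        # overlays alive after their own input ends
        opt = "shortest=1" if i == 0 else "eof_action=pass"
        chain.append(f"[c{i}][v{i}]overlay={x}:{y}:{opt}[c{i + 1}]")
    subprocess.run(
        ["ffmpeg", "-y", *inputs,
         "-filter_complex", ";".join(chain),
         "-map", f"[c{len(placements)}]", dst],
        check=True,
    )
```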
8. The method of claim 3, wherein the data processing options comprise a second video merge option; the method further comprises:
receiving a second parameter sent by the terminal device through running the software development kit corresponding to the second video merge option;
processing a plurality of video files into files in a preset format according to the second parameter;
concatenating the files in the preset format in sequence according to the second parameter to obtain a target video; and
sending the target video to the terminal device.
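A sketch of the second merge mode in claim 8, assuming FFmpeg and taking 1280x720 H.264/AAC MP4 as the "preset format" (the claim leaves the format open): each input is first normalised, then the parts are joined in order with the concat demuxer.

```python
import os
import subprocess
import tempfile

def concat_videos(videos, dst):
    """Transcode every input to one preset format, then join them in order."""
    tmp = tempfile.mkdtemp()
    parts = []
    for i, src in enumerate(videos):
        part = os.path.join(tmp, f"part{i}.mp4")
        subprocess.run(
            ["ffmpeg", "-y", "-i", src,
             "-vf", "scale=1280:720", "-r", "30",
             "-c:v", "libx264", "-c:a", "aac", part],
            check=True,
        )
        parts.append(part)
    # concat demuxer input: one "file '<path>'" line per part, in sequence
    lst = os.path.join(tmp, "list.txt")
    with open(lst, "w") as f:
        f.writelines(f"file '{p}'\n" for p in parts)
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", lst,
         "-c", "copy", dst],
        check=True,
    )
```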
9. The method of claim 3, wherein the data processing options comprise a picture-in-picture option; the method further comprises:
receiving a third parameter sent by the terminal device through running the software development kit corresponding to the picture-in-picture option;
creating a background layer and a foreground layer for the background video file and the foreground video file, respectively, according to the third parameter;
binding the background video file and the foreground video file to the background layer and the foreground layer, respectively, to obtain a target video; and
sending the target video to the terminal device.
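For the picture-in-picture option of claim 9, the background/foreground layering maps onto a scale-plus-overlay filtergraph. The inset width and position below are invented defaults; FFmpeg is assumed.

```python
import subprocess

def picture_in_picture(background: str, foreground: str, dst: str,
                       x: int = 20, y: int = 20, inset_w: int = 320) -> None:
    """Bind the background video to the bottom layer and the scaled
    foreground video to the layer above it."""
    graph = (
        f"[1:v]scale={inset_w}:-2[fg];"                    # foreground layer
        f"[0:v][fg]overlay={x}:{y}:eof_action=pass[out]"   # over the background
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", background, "-i", foreground,
         "-filter_complex", graph,
         "-map", "[out]", "-map", "0:a?", dst],
        check=True,
    )
```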
10. The method of claim 3, wherein the data processing options include a video mirroring option; the method further comprises:
receiving a fourth parameter sent by the terminal device through running the software development kit corresponding to the video mirroring option;
splitting the video file, according to the fourth parameter, into two identical streams, namely a first video stream and a second video stream;
cropping the second video stream according to the fourth parameter;
mirroring the cropped second video stream according to the fourth parameter;
merging the mirrored second video stream with the first video stream to obtain a target video, wherein the layer corresponding to the mirrored second video stream is the top layer of the target video; and
sending the target video to the terminal device.
11. The method of claim 3, wherein the data processing options include a watermarking option; the method further comprises:
receiving a fifth parameter sent by the terminal device through running the software development kit corresponding to the watermarking option;
adding a watermark at the position of the video file indicated by the fifth parameter to obtain a target video; and
sending the target video to the terminal device.
12. A server, comprising:
one or more processors, memory; and
one or more programs;
wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the method of any one of claims 1-11.
13. A system for processing multimedia data, comprising:
the server of claim 12; and
a terminal device configured to:
sending a data processing request to the server;
receiving and displaying an interface provided by the server;
sending data processing option information generated based on the interface to the server; and
receiving an executable file, sent by the server, corresponding to the data processing option information.
14. The system of claim 13, wherein the executable file is a source code file corresponding to the data processing option information, or the executable file is a software development kit corresponding to the data processing option information.
15. A non-transitory computer-readable storage medium containing a computer program which, when executed by one or more processors, causes the processors to perform the method of any one of claims 1-11.
16. A computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1-11.
CN202210112722.0A 2022-01-29 2022-01-29 Method for processing multimedia data and related device Pending CN114501079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210112722.0A CN114501079A (en) 2022-01-29 2022-01-29 Method for processing multimedia data and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210112722.0A CN114501079A (en) 2022-01-29 2022-01-29 Method for processing multimedia data and related device

Publications (1)

Publication Number Publication Date
CN114501079A true CN114501079A (en) 2022-05-13

Family

ID=81478007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210112722.0A Pending CN114501079A (en) 2022-01-29 2022-01-29 Method for processing multimedia data and related device

Country Status (1)

Country Link
CN (1) CN114501079A (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030140174A1 (en) * 2002-01-08 2003-07-24 Tsutomu Ohishi Method for generating application for information processing apparatus and image forming apparatus
US20050021552A1 (en) * 2003-06-02 2005-01-27 Jonathan Ackley Video playback image processing
CN108111874A (en) * 2016-11-16 2018-06-01 腾讯科技(深圳)有限公司 A kind of document handling method, terminal and server
CN110168599A (en) * 2017-10-13 2019-08-23 华为技术有限公司 A kind of data processing method and terminal
CN111386708A (en) * 2017-10-19 2020-07-07 拉扎尔娱乐公司 System and method for broadcasting live media streams
US20200301679A1 (en) * 2018-08-09 2020-09-24 Snapiot Inc System for creating mobile and web applications from a graphical workflow specification
CN110908643A (en) * 2018-09-14 2020-03-24 阿里巴巴集团控股有限公司 Configuration method, device and system of software development kit
CN109640175A (en) * 2018-11-21 2019-04-16 齐乐无穷(北京)文化传媒有限公司 A kind of block chain encipher-decipher method based on video file
CN113168332A (en) * 2019-02-22 2021-07-23 深圳市欢太科技有限公司 Data processing method and device and mobile terminal
CN113076155A (en) * 2020-01-03 2021-07-06 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and computer storage medium
CN111193876A (en) * 2020-01-08 2020-05-22 腾讯科技(深圳)有限公司 Method and device for adding special effect in video
CN112131102A (en) * 2020-08-28 2020-12-25 山东浪潮通软信息科技有限公司 Software project management system, equipment and medium in micro-service mode
CN112230982A (en) * 2020-10-15 2021-01-15 北京达佳互联信息技术有限公司 Material processing method and device, electronic equipment and storage medium
CN112860227A (en) * 2021-02-08 2021-05-28 杭州玳数科技有限公司 Functional code component development method
CN113014996A (en) * 2021-02-18 2021-06-22 上海哔哩哔哩科技有限公司 Video generation method and device
CN112966457A (en) * 2021-02-26 2021-06-15 严伟豪 Graphical cloud development platform
CN113050938A (en) * 2021-03-08 2021-06-29 杭州海康机器人技术有限公司 Visual software development system, method, device and computer storage medium
CN113590098A (en) * 2021-07-30 2021-11-02 中电金信软件有限公司 Software development kit SDK generation method and device and electronic equipment
CN113709549A (en) * 2021-08-24 2021-11-26 北京市商汤科技开发有限公司 Special effect data packet generation method, special effect data packet generation device, special effect data packet image processing method, special effect data packet image processing device, special effect data packet image processing equipment and storage medium
CN113891113A (en) * 2021-09-29 2022-01-04 阿里巴巴(中国)有限公司 Video clip synthesis method and electronic equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
王晓; 踪琳: "Embedded video processing system based on the OpenCV vision library", 电子质量 (Electronic Quality), no. 03 *
贾磊: "Design of a wireless video signal acquisition system", 科技资讯 (Science & Technology Information) *
黄乐丹: "Dynamic link libraries and VB multimedia program design", 温州职业技术学院学报 (Journal of Wenzhou Vocational & Technical College), no. 01 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination