CN111435995A - Method, device and system for generating dynamic picture

Method, device and system for generating dynamic picture

Info

Publication number
CN111435995A
CN111435995A (application CN201910037109.5A)
Authority
CN
China
Prior art keywords
processed
video
video frame
palette
graphics processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910037109.5A
Other languages
Chinese (zh)
Other versions
CN111435995B (en)
Inventor
思磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910037109.5A
Publication of CN111435995A
Application granted
Publication of CN111435995B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/85406 Content authoring involving a specific file format, e.g. MP4 format

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments of the disclosure disclose a method, a device and a system for generating dynamic pictures. One embodiment of the method comprises: acquiring a video frame of a video to be processed, and sending the video frame of the video to be processed to a target graphics processor; receiving a processed video frame sent by the target graphics processor, wherein the processed video frame is obtained by the target graphics processor processing a video frame included in the video to be processed; generating a palette based on the processed video frame; acquiring a video file in a preset format generated after the target graphics processor encodes the processed video frame; and converting the video file into a dynamic picture based on the palette. This embodiment avoids having to store the processed video frame in advance when the dynamic picture is generated, which saves storage space, and having the target graphics processor process the video frame of the video to be processed improves the efficiency of generating the dynamic picture.

Description

Method, device and system for generating dynamic picture
Technical Field
The embodiments of the present disclosure relate to the field of computer technology, and in particular to a method, a device and a system for generating dynamic pictures.
Background
A dynamic picture (animated image) is a picture that produces a dynamic effect when a specific group of still images is switched at a specified frequency. A common format for dynamic pictures is GIF (Graphics Interchange Format). In order to convert a video into a high-quality dynamic picture, each frame of the video usually needs to be transferred to a GPU (Graphics Processing Unit), which performs processing such as rendering and filtering. The processed video frames are then transferred to a CPU (Central Processing Unit), which generates a palette and uses the palette together with each processed video frame to generate the dynamic picture. In the above process, the processed video frames are needed twice, once to generate the palette and once to generate the dynamic picture, so they generally need to be saved in advance, or the GPU needs to process the video frames again when the dynamic picture is generated.
Disclosure of Invention
The embodiment of the disclosure provides a method, a device and a system for generating a dynamic picture.
In a first aspect, an embodiment of the present disclosure provides a method for generating a dynamic picture, the method including: acquiring a video frame of a video to be processed, and sending the video frame of the video to be processed to a target graphic processor; receiving a processed video frame sent by a target graphic processor, wherein the processed video frame is obtained by processing a video frame included in a video to be processed by the target graphic processor; generating a palette based on the processed video frame; acquiring a video file with a preset format generated after the target graphic processor encodes the processed video frame; based on the palette, the video file is converted into a dynamic picture.
In some embodiments, the video file is obtained by the target graphics processor encoding the processed video frame in a hard-coding manner under the Android system.
In some embodiments, the predetermined format is MP4 format.
In some embodiments, the palette is a global palette.
In a second aspect, an embodiment of the present disclosure provides an apparatus for generating a moving picture, the apparatus including: a first acquisition unit configured to acquire a video frame of a video to be processed and transmit the video frame of the video to be processed to a target graphic processor; the receiving unit is configured to receive a processed video frame sent by a target graphic processor, wherein the processed video frame is obtained by processing a video frame included in a video to be processed by the target graphic processor; a generating unit configured to generate a palette based on the processed video frame; the second acquisition unit is configured to acquire a video file in a preset format generated after the target graphics processor encodes the processed video frame; a conversion unit configured to convert the video file into a moving picture based on the palette.
In some embodiments, the video file is obtained by the target graphics processor encoding the processed video frame in a hard-coding manner under the Android system.
In some embodiments, the predetermined format is MP4 format.
In some embodiments, the palette is a global palette.
In a third aspect, an embodiment of the present disclosure provides a system for generating a dynamic picture, comprising a central processing unit and a graphics processor. The central processing unit is configured to acquire a video to be processed and send the video frames included in the video to be processed to the graphics processor. The graphics processor is configured to receive the video frames included in the video to be processed and process the received video frames to obtain processed video frames; send the processed video frames to the central processing unit; and encode the processed video frames to obtain a video file in a preset format and store the video file. The central processing unit is further configured to receive the processed video frames sent by the graphics processor; generate a palette based on the processed video frames; acquire the video file; and convert the video file into a dynamic picture based on the palette.
In a fourth aspect, an embodiment of the present disclosure provides a terminal device, including: a central processing unit and a graphics processor; a storage device having one or more programs stored thereon; when the one or more programs are executed by the central processing unit, respectively, the central processing unit is caused to execute the method described in any implementation manner of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which computer program, when executed by a processor, implements the method as described in any of the implementations of the first aspect.
According to the method, device, and system for generating a dynamic picture provided by the embodiments of the present disclosure, the video frames of the video to be processed are acquired and sent to the target graphics processor; the processed video frames obtained by the target graphics processor processing those frames are received; a palette is generated based on the processed video frames; the video file generated by the target graphics processor encoding the processed video frames is acquired; and finally the video file is converted into a dynamic picture based on the palette. This avoids having to store the processed video frames in advance when the dynamic picture is generated, which saves storage space, and processing the video frames of the video to be processed with the target graphics processor improves the efficiency of generating the dynamic picture.
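For orientation, the following is a minimal, non-authoritative sketch of how the five steps above might be orchestrated on the CPU side. All type and method names (GpuProcessor, PaletteBuilder, GifWriter, and so on) are hypothetical illustrations and are not defined by this disclosure.

```java
import java.io.File;
import java.util.List;

/**
 * Minimal orchestration sketch of the five steps described above.
 * All interface and method names here are hypothetical illustrations,
 * not APIs defined by this disclosure.
 */
public final class MotionPictureGenerator {

    /** Hypothetical wrapper around the target graphics processor. */
    public interface GpuProcessor {
        void submit(int[] frameArgb, int width, int height);   // step 1: send a frame to the GPU
        int[] receiveProcessed();                               // step 2: processed frame sent back
        File finishEncoding();                                  // step 4: hard-encoded MP4 file
    }

    /** Hypothetical palette builder (e.g. a global 256-color palette). */
    public interface PaletteBuilder {
        void accumulate(int[] processedFrameArgb);              // step 3: collect color statistics
        int[] build(int maxColors);                             // step 3: finished palette
    }

    /** Hypothetical MP4-to-GIF converter that applies the palette. */
    public interface GifWriter {
        File convert(File mp4, int[] palette, File gifOut);     // step 5: MP4 -> dynamic picture
    }

    public File generate(List<int[]> sourceFrames, int width, int height,
                         GpuProcessor gpu, PaletteBuilder palette,
                         GifWriter gif, File gifOut) {
        for (int[] frame : sourceFrames) {
            gpu.submit(frame, width, height);                   // acquire and send (step 1)
            int[] processed = gpu.receiveProcessed();           // receive processed frame (step 2)
            palette.accumulate(processed);                      // palette statistics (step 3)
            // The GPU hard-encodes the same processed frame into an MP4 file in
            // parallel (step 4), so no processed frame is retained on the CPU side.
        }
        int[] globalPalette = palette.build(256);
        File mp4 = gpu.finishEncoding();
        return gif.convert(mp4, globalPalette, gifOut);         // conversion (step 5)
    }
}
```

The point of this structure is that each processed frame is consumed immediately for palette statistics while the GPU encodes it, so no processed frame has to be stored in advance.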
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for generating a dynamic picture in accordance with an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of one application scenario of a method for generating a dynamic picture according to an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram illustrating an embodiment of an apparatus for generating a moving picture according to the present disclosure;
FIG. 5 is a timing diagram of one embodiment of a system for generating a moving picture according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a terminal device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant disclosure and are not limiting of the disclosure. It should be noted that, for the convenience of description, only the parts relevant to the related disclosure are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 of a method for generating a moving picture or an apparatus for generating a moving picture to which embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as a video playing application, an image processing application, a web browser application, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices. Such as smart phones, tablets, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background data server that supports data such as videos played, pictures displayed, and the like on the terminal devices 101, 102, 103. The background data server may receive data sent by the terminal device, and may also send data to the terminal device.
It should be noted that the method for generating a moving picture provided by the embodiment of the present disclosure is generally executed by the terminal devices 101, 102, 103, and accordingly, the apparatus for generating a moving picture is generally disposed in the terminal devices 101, 102, 103. More specifically, the terminal device may include a central processing unit and a graphics processing unit, and the method for generating a dynamic picture provided by the embodiments of the present disclosure is generally performed by the central processing unit, and accordingly, the apparatus for generating a dynamic picture is generally disposed in the central processing unit.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., software or software modules used to provide distributed services), or as a single piece of software or software module. This is not specifically limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the video for generating the moving picture does not need to be acquired from a remote place, the above system architecture may not include a network and a server, but only a terminal device.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for generating a dynamic picture in accordance with the present disclosure is shown. The method for generating the dynamic picture comprises the following steps:
step 201, acquiring a video frame of a video to be processed, and sending the video frame of the video to be processed to a target graphics processor.
In this embodiment, an execution subject of the method for generating a moving picture (for example, a central processing unit included in the terminal device shown in fig. 1) may remotely acquire a video frame of a video to be processed by a wired connection manner or a wireless connection manner, or locally acquire a video frame of a video to be processed. The execution body may then send the video frames of the video to be processed to the target graphics processor. Wherein the target graphics processor is a graphics processor for processing a video frame of a video to be processed, and the target graphics processor may be disposed in the execution body. A Graphics Processing Unit (GPU), also called a display core, a visual processor, and a display chip, is a microprocessor dedicated to image processing on electronic devices (such as a desktop computer, a portable computer, a tablet computer, and a smart phone).
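As a concrete illustration of this step only, the sketch below shows one way a CPU-side component on Android could obtain the video frames of the video to be processed before handing them to the target graphics processor. It is an assumed example: the disclosure does not prescribe MediaMetadataRetriever, and the 100 ms sampling interval is an arbitrary choice.

```java
import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch: one way to acquire frames of the video to be processed
 * on Android before sending them to the target graphics processor.
 * The 100 ms sampling interval is an assumption, not part of the disclosure.
 */
public final class FrameGrabber {

    public static List<Bitmap> grabFrames(String videoPath) {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        List<Bitmap> frames = new ArrayList<>();
        try {
            retriever.setDataSource(videoPath);
            String durationStr =
                    retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
            long durationMs = durationStr != null ? Long.parseLong(durationStr) : 0L;
            // Sample one frame every 100 ms (assumed frame interval for the dynamic picture).
            for (long tMs = 0; tMs < durationMs; tMs += 100) {
                Bitmap frame = retriever.getFrameAtTime(
                        tMs * 1000, MediaMetadataRetriever.OPTION_CLOSEST);
                if (frame != null) {
                    frames.add(frame);   // each frame is then sent to the target graphics processor
                }
            }
        } finally {
            try {
                retriever.release();     // best-effort cleanup
            } catch (Exception ignored) {
            }
        }
        return frames;
    }
}
```

A real implementation might instead decode frames with a hardware decoder for better throughput; the point here is only the hand-off of acquired frames to the GPU.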
Step 202, receiving the processed video frame sent by the target graphics processor.
In this embodiment, the execution body may receive a processed video frame sent by the target graphics processor. The processed video frame is obtained by processing the video frame included in the video to be processed by the target graphic processor. Specifically, the target graphics processor may process (e.g., render, filter, add special effects, etc.) the received video frame according to the operation of the user, so as to obtain a processed video frame. The processed video frame is then sent back to the execution body.
Step 203, generate a palette based on the processed video frame.
In this embodiment, the execution subject may generate the palette based on the processed video frame. The palette is a tool commonly used in image processing for optimizing output, and may be used to optimize the colors of the generated dynamic picture. In general, palettes used for generating dynamic pictures are divided into global palettes and local palettes. A global palette applies to all frames in the dynamic picture file, while a local palette applies only to the frame in the dynamic picture file that corresponds to it. In general, a palette may include 256 colors. If the palette is a global palette, all frames of the generated dynamic picture share the same 256 colors; if the palette is a local palette, each frame of the generated dynamic picture may use the 256 colors of the palette corresponding to that frame.
It should be noted that the method for generating the color palette is a well-known technique that is widely researched and applied at present, and is not described herein again.
In some optional implementations of this embodiment, the palette generated in step 203 is a global palette. The global palette can be applied to every frame of the generated dynamic picture, thereby improving the efficiency of generating the dynamic picture and reducing the storage space occupied by the dynamic picture.
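As an illustration only, the sketch below shows one simple popularity-based way to build a global palette from the processed video frames: color statistics are accumulated over every frame, and the 256 most frequent color cells become the palette. The disclosure leaves the palette algorithm open; median-cut or octree quantization are common alternatives, and nothing here should be read as the patented method.

```java
import java.util.Arrays;
import java.util.Comparator;

/**
 * Illustrative sketch of a popularity-based global palette: color statistics
 * are folded in for every processed frame, and the 256 most frequent color
 * cells become the palette. This is one simple possibility, not the
 * algorithm prescribed by the disclosure.
 */
public final class GlobalPaletteBuilder {

    // 4 bits per channel -> 4096 color cells
    private final long[] counts = new long[1 << 12];

    /** Step 203: fold one processed frame (ARGB pixels) into the statistics. */
    public void accumulate(int[] argbPixels) {
        for (int p : argbPixels) {
            int r = (p >> 20) & 0xF;   // top 4 bits of red
            int g = (p >> 12) & 0xF;   // top 4 bits of green
            int b = (p >> 4) & 0xF;    // top 4 bits of blue
            counts[(r << 8) | (g << 4) | b]++;
        }
    }

    /** Returns up to 256 palette colors as 0xRRGGBB values. */
    public int[] build() {
        Integer[] cells = new Integer[counts.length];
        for (int i = 0; i < cells.length; i++) cells[i] = i;
        Arrays.sort(cells, Comparator.comparingLong(c -> -counts[c]));  // most frequent first
        int size = 0;
        while (size < 256 && counts[cells[size]] > 0) size++;
        int[] palette = new int[size];
        for (int i = 0; i < size; i++) {
            int cell = cells[i];
            int r = ((cell >> 8) & 0xF) * 17;   // expand 4-bit value back to 0..255
            int g = ((cell >> 4) & 0xF) * 17;
            int b = (cell & 0xF) * 17;
            palette[i] = (r << 16) | (g << 8) | b;
        }
        return palette;
    }
}
```

In the flow described above, accumulate would be called once for every processed video frame received in step 202, and build would be called once all frames have been received.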
Step 204, acquiring a video file in a preset format generated after the target graphics processor encodes the processed video frame.
In this embodiment, the execution subject may locally obtain a video file in a preset format generated after the target graphics processor encodes the processed video frame. In general, the video file may be generated by the target graphics processor and stored in a target storage area, and the execution subject may acquire the video file from that target storage area. Typically, the target storage area is a storage area local to the execution subject, such as a hard disk, a memory, or a cache of the CPU. The storage space occupied by the generated video file is far smaller than that needed to store the processed video frames, so storage space is saved.
Generally, encoding video frames with hardware other than the CPU, such as a GPU, a dedicated DSP, an FPGA, or an ASIC chip, is called hard coding. In this embodiment, the target graphics processor encodes the processed video frame in a hard-coding manner. Hard coding may have a faster processing speed than soft coding (i.e., encoding by the CPU), which helps to improve the efficiency of generating dynamic pictures.
In some optional implementations of this embodiment, the video file may be obtained by the target graphics processor encoding the processed video frame in a hard-coding manner under the Android system. Because the Android system is widely used, this implementation can improve the compatibility of encoding the processed video.
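For illustration, the sketch below shows a typical way hard coding is set up on Android: an H.264 hardware encoder (MediaCodec) is configured with a Surface input so that the graphics processor can render each processed frame directly into the encoder, and the encoded stream is written into an MP4 file with MediaMuxer. The bit rate, frame rate, and timeout values are assumptions, and this is not asserted to be the exact implementation of the disclosure.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.view.Surface;
import java.io.IOException;
import java.nio.ByteBuffer;

/**
 * Illustrative sketch of Android hard coding: an H.264 hardware encoder with a
 * Surface input, drained into an MP4 file via MediaMuxer. Parameter values are
 * assumptions, not part of the disclosure.
 */
public final class HardEncoderSketch {

    private final MediaCodec encoder;
    private final MediaMuxer muxer;
    private final Surface inputSurface;   // the GPU renders processed frames onto this Surface
    private int trackIndex = -1;
    private boolean muxerStarted = false;

    public HardEncoderSketch(String mp4Path, int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);   // assumed bit rate
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);        // assumed frame rate
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        inputSurface = encoder.createInputSurface();
        encoder.start();
        muxer = new MediaMuxer(mp4Path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    public Surface getInputSurface() {
        return inputSurface;
    }

    /** Drains whatever the hardware encoder has produced into the MP4 file. */
    public void drain(boolean endOfStream) {
        if (endOfStream) {
            encoder.signalEndOfInputStream();
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int index = encoder.dequeueOutputBuffer(info, 10_000);
            if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
                if (!endOfStream) return;                 // nothing more right now
            } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                trackIndex = muxer.addTrack(encoder.getOutputFormat());
                muxer.start();
                muxerStarted = true;
            } else if (index >= 0) {
                ByteBuffer data = encoder.getOutputBuffer(index);
                if (muxerStarted && info.size > 0 && data != null) {
                    muxer.writeSampleData(trackIndex, data, info);
                }
                encoder.releaseOutputBuffer(index, false);
                if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
                    muxer.stop();
                    muxer.release();
                    encoder.stop();
                    encoder.release();
                    return;
                }
            }
        }
    }
}
```

In this sketch the processed frames never pass back through CPU memory on the encoding path; only the resulting MP4 file is read back, as in step 204.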
In some optional implementations of this embodiment, the preset format is the MP4 format. MP4, also known as MPEG-4 Part 14, is a multimedia container format defined by the Moving Picture Experts Group (MPEG) as part of the MPEG-4 standard; it commonly uses the extension .mp4 and stores digital audio and digital video. In practice, MP4 files are often used to provide video playback services to clients (especially mobile device clients) through content distribution platforms such as websites and client software. Since MP4 offers high compression and good compatibility, this implementation can help improve the efficiency of generating dynamic pictures.
In step 205, the video file is converted into a motion picture based on the palette.
In this embodiment, the execution subject may convert the video file into the moving picture based on the palette. Generally, the moving picture may be a picture in GIF format. The method for converting a video file into a moving picture by using a palette is a well-known technique widely studied and applied at present, and is not described herein again.
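As a rough illustration of the per-frame work in this conversion, the sketch below (desktop Java, using java.awt image classes) maps one decoded frame onto the global palette by nearest-color lookup, producing an indexed image that a GIF encoder can write as one frame of the animation. Decoding the MP4 and assembling the GIF container are left to existing decoders and encoders and are not shown; the helper names are hypothetical.

```java
import java.awt.image.BufferedImage;
import java.awt.image.IndexColorModel;

/**
 * Illustrative sketch of the per-frame half of step 205: a decoded ARGB frame
 * is mapped onto the global palette by nearest-color lookup, producing an
 * indexed image suitable for a GIF encoder. Hypothetical helper, not the
 * disclosure's implementation.
 */
public final class PaletteMapper {

    /** Maps an ARGB frame to a palette-indexed image (palette entries are 0xRRGGBB). */
    public static BufferedImage toIndexedFrame(int[] argb, int width, int height, int[] palette) {
        byte[] r = new byte[palette.length];
        byte[] g = new byte[palette.length];
        byte[] b = new byte[palette.length];
        for (int i = 0; i < palette.length; i++) {
            r[i] = (byte) ((palette[i] >> 16) & 0xFF);
            g[i] = (byte) ((palette[i] >> 8) & 0xFF);
            b[i] = (byte) (palette[i] & 0xFF);
        }
        IndexColorModel colorModel = new IndexColorModel(8, palette.length, r, g, b);
        BufferedImage indexed =
                new BufferedImage(width, height, BufferedImage.TYPE_BYTE_INDEXED, colorModel);

        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int best = nearestIndex(argb[y * width + x], palette);
                indexed.getRaster().setSample(x, y, 0, best);   // store the palette index
            }
        }
        return indexed;
    }

    /** Plain nearest-color search; fine for a sketch, too slow for production use. */
    private static int nearestIndex(int argbPixel, int[] palette) {
        int pr = (argbPixel >> 16) & 0xFF, pg = (argbPixel >> 8) & 0xFF, pb = argbPixel & 0xFF;
        int best = 0;
        long bestDist = Long.MAX_VALUE;
        for (int i = 0; i < palette.length; i++) {
            int dr = pr - ((palette[i] >> 16) & 0xFF);
            int dg = pg - ((palette[i] >> 8) & 0xFF);
            int db = pb - (palette[i] & 0xFF);
            long dist = (long) dr * dr + (long) dg * dg + (long) db * db;
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }
}
```

A complete converter would decode the MP4 (for example with a hardware or software decoder), call toIndexedFrame on each decoded frame, and hand the indexed frames to a GIF encoder together with a frame delay.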
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for generating a dynamic picture according to the present embodiment. In the application scenario of fig. 3, the terminal device 301 includes a central processor 3011 and a graphics processor 3012 (i.e., a target graphics processor). The central processor 3011 first obtains the video frames of the video 302 to be processed and sends them to the graphics processor 3012. Subsequently, the central processor 3011 receives the processed video frames 303 sent by the graphics processor 3012; the processed video frames 303 are obtained by the graphics processor applying special-effect processing to the received video frames according to the operation of the user. The central processor 3011 then generates a palette 304 (e.g., a global palette) based on the processed video frames 303. Next, the video file 305 in the preset format (for example, MP4 format), generated by the graphics processor 3012 encoding the processed video frames 303, is obtained from the local cache. Finally, the central processor 3011 converts the video file 305 into a dynamic picture 306 based on the palette 304.
According to the method provided by the embodiment of the present disclosure, the video frames of the video to be processed are acquired and sent to the target graphics processor; the processed video frames obtained by the target graphics processor processing those frames are received; a palette is generated based on the processed video frames; the video file generated by the target graphics processor encoding the processed video frames is acquired; and finally the video file is converted into a dynamic picture based on the palette. This avoids having to save the processed video frames in advance when the dynamic picture is generated, which saves storage space, and processing the video frames of the video to be processed with the target graphics processor improves the efficiency of generating the dynamic picture.
With further reference to fig. 4, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for generating a moving picture, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the apparatus 400 for generating a moving picture of the present embodiment includes: a first acquisition unit 401 configured to acquire a video frame of a video to be processed and transmit the video frame of the video to be processed to a target graphics processor; a receiving unit 402, configured to receive a processed video frame sent by a target graphics processor, where the processed video frame is obtained by processing a video frame included in a video to be processed by the target graphics processor; a generating unit 403 configured to generate a palette based on the processed video frame; a second obtaining unit 404, configured to obtain a video file in a preset format generated after the target graphics processor encodes the processed video frame; a conversion unit 405 configured to convert the video file into a moving picture based on the palette.
In this embodiment, the first obtaining unit 401 may obtain the video frame of the video to be processed from a remote location or from a local location through a wired connection or a wireless connection. Then, the first obtaining unit 401 may send the video frame of the video to be processed to the target graphics processor. The target graphics processor is a graphics processor for processing video frames of a video to be processed, and may be disposed in the electronic device in which the apparatus 400 is disposed. A Graphics Processing Unit (GPU), also called a display core, a visual processor, or a display chip, is a microprocessor dedicated to image processing on electronic devices (such as a desktop computer, a portable computer, a tablet computer, and a smart phone).
In this embodiment, the receiving unit 402 may receive a processed video frame sent by a target graphics processor. The processed video frame is obtained by processing the video frame included in the video to be processed by the target graphic processor. Specifically, the target graphics processor may process (e.g., render, filter, add special effects, etc.) the received video frame according to the operation of the user, so as to obtain a processed video frame. The processed video frames are then sent back to the device 400.
In this embodiment, the generation unit 403 may generate a palette based on the processed video frame. The palette is a tool commonly used in image processing for optimizing output, and may be used to optimize the generated dynamic picture. In general, palettes used for generating dynamic pictures are divided into global palettes and local palettes. A global palette applies to all frames in the dynamic picture file, while a local palette applies only to the frame in the dynamic picture file that corresponds to it. In general, a palette may include 256 colors. If the palette is a global palette, all frames of the generated dynamic picture share the same 256 colors; if the palette is a local palette, each frame of the generated dynamic picture may use the 256 colors of the palette corresponding to that frame.
In this embodiment, the second obtaining unit 404 may obtain, locally or remotely, a video file in a preset format generated by encoding the processed video frame by the target graphics processor. In general, the video file may be generated by the target graphics processor and stored in the target storage area, and the second obtaining unit 404 may obtain the video file from the target storage area. Typically, the target storage area is a storage area local to the electronic device in which the apparatus 400 is located, such as a hard disk, a memory, a cache of a CPU, and the like.
Generally, encoding video frames with hardware other than the CPU, such as a GPU, a dedicated DSP, an FPGA, or an ASIC chip, is called hard coding. In this embodiment, the target graphics processor encodes the processed video frame in a hard-coding manner.
In this embodiment, the conversion unit 405 may convert the video file into the moving picture based on the palette. Generally, the moving picture may be a picture in GIF format. The method for converting a video file into a moving picture by using a palette is a well-known technique widely studied and applied at present, and is not described herein again.
In some optional implementations of this embodiment, the video file is obtained by the target graphics processor encoding the processed video frame in a hard-coding manner under the Android system.
In some optional implementations of this embodiment, the preset format is an MP4 format.
In some optional implementations of this embodiment, the palette is a global palette.
The apparatus provided by the above embodiment of the present disclosure acquires the video frames of the video to be processed, sends them to the target graphics processor, receives the processed video frames obtained by the target graphics processor processing those frames, generates the palette based on the processed video frames, then obtains the video file generated by the target graphics processor encoding the processed video frames, and finally converts the video file into a dynamic picture based on the palette. This avoids having to save the processed video frames in advance when generating the dynamic picture; because the storage space occupied by the generated video file is far smaller than that needed to store the processed video frames, storage space is saved, and having the target graphics processor process the video frames of the video to be processed improves the efficiency of generating the dynamic picture.
With continued reference to FIG. 5, a timing diagram 500 of one embodiment of a system for generating a dynamic picture according to the present disclosure is shown.
The system for generating a dynamic picture of the present disclosure may include a central processing unit and a graphics processor. The central processing unit is configured to acquire video frames of a video to be processed and send the video frames included in the video to be processed to the graphics processor. The graphics processor is configured to receive the video frames included in the video to be processed and process the received video frames to obtain processed video frames; send the processed video frames to the central processing unit; and encode the processed video frames to obtain a video file in a preset format and store the video file. The central processing unit is further configured to receive the processed video frames sent by the graphics processor; generate a palette based on the processed video frames; acquire the video file in the preset format generated by the graphics processor encoding the processed video frames; and convert the video file into a dynamic picture based on the palette.
As shown in fig. 5, in step 501, the central processor acquires a video frame of a video to be processed and sends the video frame of the video to be processed to the graphics processor.
In this embodiment, a central processing unit (for example, the central processing unit included in the terminal device shown in fig. 1) may obtain a video frame of a video to be processed from a remote location or from a local location through a wired connection or a wireless connection. The central processor may then send the video frames of the video to be processed to the target graphics processor.
Step 502, the graphics processor processes the received video frame to obtain a processed video frame.
In this embodiment, the graphics processor receives a video frame included in a video to be processed, and processes the received video frame to obtain a processed video frame.
The graphics processor is a graphics processor for processing video frames of a video to be processed, and may be disposed in the execution body. A Graphics Processing Unit (GPU), also called a display core, a visual processor, or a display chip, is a microprocessor dedicated to image processing on electronic devices (such as a desktop computer, a portable computer, a tablet computer, and a smart phone). The graphics processor may process (e.g., render, filter, add special effects, etc.) the received video frame according to the operation of the user, so as to obtain a processed video frame.
Step 503, the graphics processor sends the processed video frame to the central processing unit, and codes the processed video frame to obtain a video file with a preset format and store the video file.
In this embodiment, the graphics processor may send the processed video frame obtained in step 502 to the central processor, and encode the processed video frame to obtain a video file in a preset format. The graphics processor encodes the processed video frame in a hard-coding manner. Hard coding may have a faster processing speed than soft coding (i.e., encoding by the CPU), which helps to improve the efficiency of generating dynamic pictures.
The video file may be generated by the graphics processor and stored in a target storage area. Typically, the target storage area is a local storage area, such as a hard disk, a memory, or a cache of the CPU. The storage space occupied by the generated video file is far smaller than that needed to store the processed video frames, so storage space is saved.
In step 504, the central processing unit generates a palette based on the received processed video frames.
In this embodiment, the central processor may generate the palette based on the received processed video frames. The palette is a tool commonly used in image processing for optimizing output, and may be used to optimize the generated dynamic picture. In general, palettes used for generating dynamic pictures are divided into global palettes and local palettes. A global palette applies to all frames in the dynamic picture file, while a local palette applies only to the frame in the dynamic picture file that corresponds to it. In general, a palette may include 256 colors. If the palette is a global palette, all frames of the generated dynamic picture share the same 256 colors; if the palette is a local palette, each frame of the generated dynamic picture may use the 256 colors of the palette corresponding to that frame.
It should be noted that the method for generating the color palette is a well-known technique that is widely researched and applied at present, and is not described herein again.
Step 505, the central processing unit obtains a video file.
In this embodiment, the central processor may acquire the video file generated by the graphics processor from the target storage area. Generally, encoding video frames with hardware other than the CPU, such as a GPU, a dedicated DSP, an FPGA, or an ASIC chip, is called hard coding. In this embodiment, the graphics processor encodes the processed video frames in a hard-coding manner. Hard coding may have a faster processing speed than soft coding (i.e., encoding by the CPU), which helps to improve the efficiency of generating moving pictures.
In step 506, the central processor converts the video file into a moving picture based on the palette.
In this embodiment, the central processor may convert the video file into the moving picture based on the palette. Generally, the moving picture may be a picture in GIF format. The method for converting a video file into a moving picture by using a palette is a well-known technique that is widely studied and applied at present, and is not described herein again.
In the system provided by the above embodiment of the present disclosure, the central processor acquires the video frames of the video to be processed and sends them to the graphics processor; the graphics processor processes the received video frames to obtain processed video frames, sends the processed video frames to the central processor, and encodes the processed video frames to obtain a video file, which it stores; the central processor receives the processed video frames, generates a palette based on them, then acquires the generated video file, and finally converts the video file into a dynamic picture based on the palette. The system avoids having to pre-store the processed video frames when the dynamic picture is generated; because the storage space occupied by the generated video file is far smaller than that needed to store the processed video frames, storage space is saved, and processing the video frames of the video to be processed with the graphics processor improves the efficiency of generating the dynamic picture.
Referring now to fig. 6, shown is a schematic block diagram of a terminal device 600 suitable for use in implementing embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The terminal device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present disclosure.
As shown in fig. 6, the terminal device 600 may include a processing apparatus 601, which includes a central processing unit and a graphics processor. The processing apparatus 601 may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the terminal device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 608 including, for example, magnetic tape, hard disk, etc.; and communication devices 609.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the terminal device; or may exist separately without being assembled into the terminal device. The computer-readable medium carries one or more programs which, when executed by a central processing unit included in the terminal device, cause the central processing unit to: acquiring a video frame of a video to be processed, and sending the video frame of the video to be processed to a target graphic processor; receiving a processed video frame sent by a target graphic processor, wherein the processed video frame is obtained by processing a video frame included in a video to be processed by the target graphic processor; generating a palette based on the processed video frame; acquiring a video file with a preset format generated after the target graphic processor encodes the processed video frame; based on the palette, the video file is converted into a dynamic picture.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not in some cases constitute a limitation of the unit itself; for example, the first acquisition unit may also be described as a "unit that acquires a video frame of a video to be processed and sends the video frame of the video to be processed to a target graphics processor".
The foregoing description is only a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in this disclosure.

Claims (14)

1. A method for generating a motion picture, comprising:
acquiring a video frame of a video to be processed, and sending the video frame of the video to be processed to a target graphic processor;
receiving a processed video frame sent by the target graphics processor, wherein the processed video frame is obtained by processing a video frame included in the video to be processed by the target graphics processor;
generating a palette based on the processed video frame;
acquiring a video file with a preset format generated after the target graphic processor encodes the processed video frame;
based on the palette, converting the video file into a dynamic picture.
2. The method of claim 1, wherein the video file is obtained by the target graphics processor encoding the processed video frame in a hard-coding manner under the Android system.
3. The method of claim 1, wherein the preset format is an MP4 format.
4. The method of one of claims 1-3, wherein the palette is a global palette.
5. An apparatus for generating a moving picture, comprising:
a first acquisition unit configured to acquire a video frame of a video to be processed and transmit the video frame of the video to be processed to a target graphics processor;
a receiving unit, configured to receive a processed video frame sent by the target graphics processor, where the processed video frame is obtained by processing a video frame included in the video to be processed by the target graphics processor;
a generating unit configured to generate a palette based on the processed video frame;
a second obtaining unit, configured to obtain a video file in a preset format generated after the target graphics processor encodes the processed video frame;
a conversion unit configured to convert the video file into a dynamic picture based on the palette.
6. The apparatus of claim 5, wherein the video file is obtained by the target graphics processor encoding the processed video frame in a hard-coding manner under the Android system.
7. The apparatus of claim 5, wherein the preset format is an MP4 format.
8. The apparatus of one of claims 5-7, wherein the palette is a global palette.
9. A system for generating a motion picture, comprising: central processing unit and graphics processor, wherein:
the central processor is configured to acquire video frames of a video to be processed and send the video frames included in the video to be processed to the graphics processor;
the graphics processor is configured to receive a video frame included in the video to be processed, and process the received video frame to obtain a processed video frame; sending the processed video frame to the central processor; coding the processed video frame to obtain a video file with a preset format and storing the video file;
the central processor is configured to receive the processed video frame sent by the graphics processor; generating a palette based on the processed video frame; acquiring the video file; based on the palette, converting the video file into a dynamic picture.
10. The system of claim 9, wherein the video file is obtained by the graphics processor encoding the processed video frame in a hard-coding manner under the Android system.
11. The system of claim 9, wherein the preset format is MP4 format.
12. The system of one of claims 9-11, wherein the palette is a global palette.
13. A terminal device, comprising:
a central processing unit and a graphics processor;
storage means for storing one or more programs;
when executed by the central processor, the one or more programs, respectively, cause the central processor to perform the method of any of claims 1-4.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-4.
CN201910037109.5A 2019-01-15 2019-01-15 Method, device and system for generating dynamic picture Active CN111435995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910037109.5A CN111435995B (en) 2019-01-15 2019-01-15 Method, device and system for generating dynamic picture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910037109.5A CN111435995B (en) 2019-01-15 2019-01-15 Method, device and system for generating dynamic picture

Publications (2)

Publication Number Publication Date
CN111435995A 2020-07-21
CN111435995B CN111435995B (en) 2022-05-17

Family

ID=71580857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910037109.5A Active CN111435995B (en) 2019-01-15 2019-01-15 Method, device and system for generating dynamic picture

Country Status (1)

Country Link
CN (1) CN111435995B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006246110A (en) * 2005-03-04 2006-09-14 Matsushita Electric Ind Co Ltd Apparatus and system for transmitting video
CN103971391A (en) * 2013-02-01 2014-08-06 腾讯科技(深圳)有限公司 Animation method and device
US20140219634A1 (en) * 2013-02-05 2014-08-07 Redux, Inc. Video preview creation based on environment
US20140375881A1 (en) * 2013-06-24 2014-12-25 Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. Method of Converting a Video File to a Graphics Interchange Format Image Using Same Palette Table for Consecutive Frames
CN106464888A (en) * 2014-03-17 2017-02-22 诺基亚技术有限公司 Method and technical equipment for video encoding and decoding
CN105678680A (en) * 2015-12-30 2016-06-15 魅族科技(中国)有限公司 Image processing method and device
CN105828182A (en) * 2016-05-13 2016-08-03 北京思特奇信息技术股份有限公司 Method and system for real-time rending video based on OpenGL
CN107241598A (en) * 2017-06-29 2017-10-10 贵州电网有限责任公司 A kind of GPU coding/decoding methods for multichannel h.264 video conference
CN108055587A (en) * 2017-11-30 2018-05-18 星潮闪耀移动网络科技(中国)有限公司 Sharing method, device, mobile terminal and the storage medium of image file
CN108769738A (en) * 2018-06-15 2018-11-06 广州酷狗计算机科技有限公司 Method for processing video frequency, device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Lijun: "A Brief Discussion of Common Image and Video Formats and Their Conversion in Television Program Production", China Science and Technology Information *

Also Published As

Publication number Publication date
CN111435995B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
US10777231B2 (en) Embedding thumbnail information into video streams
JP6242029B2 (en) Technology for low power image compression and display
CN110809189B (en) Video playing method and device, electronic equipment and computer readable medium
CN110290398B (en) Video issuing method and device, storage medium and electronic equipment
CN110856036A (en) Remote desktop implementation method, interaction method, device, equipment and storage medium
CN110806846A (en) Screen sharing method, screen sharing device, mobile terminal and storage medium
CN115767181A (en) Live video stream rendering method, device, equipment, storage medium and product
CN114040251A (en) Audio and video playing method, system, storage medium and computer program product
US9538208B2 (en) Hardware accelerated distributed transcoding of video clips
CN115761090A (en) Special effect rendering method, device, equipment, computer readable storage medium and product
CN111355978B (en) Video file processing method and device, mobile terminal and storage medium
CN114125432A (en) Video data processing method, device, equipment and storage medium
JP2023538825A (en) Methods, devices, equipment and storage media for picture to video conversion
CN112053286A (en) Image processing method, image processing device, electronic equipment and readable medium
CN111669476A (en) Watermark processing method, device, electronic equipment and medium
CN111435995B (en) Method, device and system for generating dynamic picture
CN116248889A (en) Image encoding and decoding method and device and electronic equipment
CN113766266B (en) Audio and video processing method, device, equipment and storage medium
JP6395971B1 (en) Modification of graphical command token
WO2022213801A1 (en) Video processing method, apparatus, and device
CN111355997A (en) Video file generation method and device, mobile terminal and storage medium
US10659826B2 (en) Cloud streaming service system, image cloud streaming service method using application code, and device therefor
KR102273142B1 (en) System for cloud streaming service, method of image cloud streaming service using application code conversion and apparatus for the same
CN116744026A (en) Voice-to-wheat confluence method and equipment
CN117544740A (en) Video recording method, apparatus, device, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant