WO2018184464A1 - Picture file processing method, apparatus, and storage medium - Google Patents

Picture file processing method, apparatus, and storage medium

Info

Publication number
WO2018184464A1
WO2018184464A1 PCT/CN2018/079442
Authority
WO
WIPO (PCT)
Prior art keywords
data
image
code stream
picture file
stream data
Prior art date
Application number
PCT/CN2018/079442
Other languages
English (en)
French (fr)
Inventor
王诗涛
刘晓宇
陈家君
黄晓政
丁飘
刘海军
罗斌姬
陈新星
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Publication of WO2018184464A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234318Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234336Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by media transcoding, e.g. video is transformed into a slideshow of still pictures or audio is converted into text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440236Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text

Definitions

  • the present application relates to the field of computer technologies, and in particular, to a picture file processing method, apparatus, and storage medium.
  • the download traffic of terminal devices has increased significantly.
  • the image file traffic accounts for a large proportion.
  • a large number of image files also places considerable pressure on the network transmission bandwidth. Reducing the size of image files would not only improve loading speed but also save substantial bandwidth and storage costs.
  • the embodiment of the present application provides a method, a device, and a storage medium for processing a picture file.
  • the RGBA data is obtained by decoding the first code stream data and the second code stream data respectively, so that the transparency data is retained while a video codec mode is used, ensuring the quality of the picture file.
  • the embodiment of the present application provides a method for processing a picture file, which is applied to a computing device, including:
  • An embodiment of the present application provides a picture file processing apparatus, including:
  • a processor and a memory coupled to the processor, the memory storing machine readable instructions executable by the processor, the processor executing the machine readable instructions to:
  • Embodiments of the present application provide a non-transitory computer readable storage medium storing machine readable instructions for causing a processor to perform the method described above.
  • FIG. 1a is a schematic diagram of an implementation environment of a method for processing a picture file according to an embodiment of the present application;
  • FIG. 1b is a schematic diagram of an internal structure of a computing device used to implement a method for processing a picture file according to an embodiment of the present application;
  • FIG. 1c is a schematic flowchart of a method for processing a picture file according to an embodiment of the present application;
  • FIG. 2 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of a multi-frame image included in a dynamic format image file according to an embodiment of the present disclosure
  • FIG. 4a is a schematic flowchart of another method for processing a picture file according to an embodiment of the present application;
  • FIG. 4b is a schematic diagram of conversion from RGB data to YUV data according to an embodiment of the present application;
  • FIG. 4c is a diagram showing an example of conversion from transparency data to YUV data according to an embodiment of the present application;
  • FIG. 4d is a diagram showing another example of conversion from transparency data to YUV data according to an embodiment of the present application;
  • FIG. 5a is a schematic diagram of picture header information according to an embodiment of the present application;
  • FIG. 5b is a schematic diagram of an image feature information data segment according to an embodiment of the present application;
  • FIG. 5c is a schematic diagram of a user-defined information data segment according to an embodiment of the present application;
  • FIG. 6a is a schematic diagram of encapsulation of a picture file in a static format according to an embodiment of the present application;
  • FIG. 6b is a schematic diagram of encapsulation of a picture file in a dynamic format according to an embodiment of the present application;
  • FIG. 7a is a diagram showing an example of encapsulation of another picture file in a static format according to an embodiment of the present application;
  • FIG. 7b is a diagram showing an example of encapsulation of another picture file in a dynamic format according to an embodiment of the present application;
  • FIG. 8a is a schematic diagram of frame header information according to an embodiment of the present application;
  • FIG. 8b is a schematic diagram of an image frame header information according to an embodiment of the present application.
  • FIG. 8c is a schematic diagram of a transparent channel frame header information according to an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of another method for processing a picture file according to an embodiment of the present application;
  • FIG. 10 is a schematic flowchart of another method for processing a picture file according to an embodiment of the present application;
  • FIG. 11 is a schematic flowchart of another method for processing a picture file according to an embodiment of the present application;
  • FIG. 12 is a schematic flowchart of another method for processing a picture file according to an embodiment of the present application;
  • FIG. 13 is a schematic flowchart of another method for processing a picture file according to an embodiment of the present application;
  • FIG. 14a is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
  • FIG. 14b is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
  • FIG. 14c is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
  • FIG. 14d is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
  • FIG. 15 is a schematic structural diagram of another coding apparatus according to an embodiment of the present disclosure.
  • FIG. 16a is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
  • FIG. 16b is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
  • FIG. 16c is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
  • FIG. 16d is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
  • FIG. 16e is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
  • FIG. 17 is a schematic structural diagram of another decoding apparatus according to an embodiment of the present disclosure.
  • FIG. 18 is a schematic structural diagram of a picture file processing apparatus according to an embodiment of the present disclosure.
  • FIG. 19 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present disclosure.
  • FIG. 20 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present disclosure.
  • FIG. 21 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present disclosure.
  • FIG. 22 is a system architecture diagram of a picture file processing system according to an embodiment of the present disclosure.
  • FIG. 23 is a schematic diagram of an encoding module according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of a decoding module according to an embodiment of the present application.
  • FIG. 25 is a schematic structural diagram of a terminal device according to an embodiment of the present disclosure.
  • some embodiments of the present application provide a method, a device, and a storage medium for processing a picture file, which can separately encode RGB data and transparency data in a video coding mode, ensuring the quality of the picture file while improving its compression ratio.
  • the encoding device acquires RGBA data corresponding to the first image in the picture file, separates the RGBA data to obtain RGB data and transparency data of the first image, encodes the RGB data of the first image according to a first video coding mode to generate first code stream data, encodes the transparency data of the first image according to a second video coding mode to generate second code stream data, and writes the first code stream data and the second code stream data into a code stream data segment of the picture file.
  • In this way, the compression ratio of the picture file can be improved and the size of the picture file reduced, thereby improving image loading speed and saving network transmission bandwidth and storage costs. In addition, because the RGB data and the transparency data in the picture file are encoded separately, the transparency data in the picture file is preserved while a video coding mode is adopted, thereby ensuring the quality of the picture file.
  • FIG. 1a is a schematic diagram of an implementation environment of a method for processing a picture file according to an embodiment of the present application.
  • the computing device 10 is configured to implement the image file processing method provided by any embodiment of the present application.
  • the computing device 10 and the user terminal 20 are connected by a network 30, which may be a wired network or a wireless network.
  • FIG. 1b is a schematic diagram of an internal structure of a computing device 10 for implementing a method for processing a picture file according to an embodiment of the present application.
  • the computing device 10 includes a processor 100012, a non-volatile storage medium 100013, and an internal memory 100014 that are coupled by a system bus 100011.
  • the non-volatile storage medium 100013 of the computing device 10 stores an operating system 1000131, and further stores a picture file processing device 1000132, which is used to implement the picture file processing method provided by any embodiment of the present application.
  • the processor 100012 of the computing device 10 is configured to provide computing and control capabilities to support operation of the entire terminal device.
  • the internal memory 100014 in the computing device 10 provides an environment for the operation of the picture file processing device in the non-volatile storage medium 100013.
  • the internal memory 100014 can store computer readable instructions, which when executed by the processor 100012, can cause the processor 100012 to execute the picture file processing method provided by any embodiment of the present application.
  • the computing device 10 can be a terminal or a server.
  • the terminal may be a personal computer or a mobile electronic device including at least one of a mobile phone, a tablet, a personal digital assistant, or a wearable device.
  • the server can be implemented as a stand-alone server or as a server cluster consisting of multiple physical servers.
  • Those skilled in the art can understand that the structure shown in FIG. 1b is only a block diagram of the part of the structure related to the solution of the present application and does not constitute a limitation on the computing device to which the solution is applied; a specific computing device may include more or fewer components than those shown in FIG. 1b, combine some components, or have a different arrangement of components.
  • FIG. 1c is a schematic flowchart of a method for processing a picture file according to an embodiment of the present application. The method may be performed by the foregoing computing device. In the example shown in FIG. 1c, the computing device is a terminal device, and the method in the embodiment of the present application may include steps 101 to 104.
  • Step 101 Acquire RGBA data corresponding to the first image in the picture file, and separate the RGBA data to obtain RGB data and transparency data of the first image.
  • the encoding device running in the terminal device acquires RGBA data corresponding to the first image in the picture file, and separates the RGBA data to obtain RGB data and transparency data of the first image.
  • the data corresponding to the first image is RGBA data.
  • the RGBA data is data in a color space representing Red, Green, Blue, and Alpha (transparency) information.
  • the RGBA data corresponding to the first image is separated into RGB data and transparency data.
  • the RGB data is color data included in the RGBA data
  • the transparency data is transparency data included in the RGBA data.
  • the first image composed of N pixels includes N RGBA data, in the form of:
  • the encoding apparatus needs to separate the RGBA data of the first image to obtain the RGB data and transparency data of the first image. For example, after the separation operation is performed on the first image composed of the N pixels, the RGB data and the transparency data of each of the N pixels are obtained, in the following form:
  • step 102 and step 103 are respectively performed.
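As an illustrative sketch (not part of the patent text), the separation in step 101 can be expressed as follows, under the assumption that the RGBA data is a flat byte sequence with four bytes per pixel, as a typical image decoder would produce:

```python
def separate_rgba(rgba: bytes):
    """Split interleaved RGBA pixel data into an RGB plane and an alpha plane.

    Assumes 4 bytes per pixel (R, G, B, A); this layout is an illustrative
    assumption, not the patent's own representation.
    """
    if len(rgba) % 4:
        raise ValueError("RGBA buffer length must be a multiple of 4")
    rgb = bytearray()
    alpha = bytearray()
    for i in range(0, len(rgba), 4):
        rgb += rgba[i:i + 3]       # keep the R, G, B components together
        alpha.append(rgba[i + 3])  # collect the A (transparency) component
    return bytes(rgb), bytes(alpha)
```

For two pixels (255, 0, 0, 128) and (0, 255, 0, 255), the RGB plane comes back as the six color bytes and the alpha plane as the two transparency bytes, mirroring the per-pixel separation described above.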
  • Step 102 Encode RGB data of the first image according to a first video coding mode to generate first code stream data.
  • the encoding device encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data.
  • the first image may be a frame image included in a static format image file; or the first image may be any frame image included in a dynamic format image file.
  • Step 103 Encode transparency data of the first image according to a second video coding mode to generate second code stream data.
  • the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode to generate second code stream data.
  • the first video coding mode or the second video coding mode may include, but is not limited to, an intra-prediction (I) frame coding mode and an inter-prediction (P) frame coding mode.
  • An I frame is a key frame: when decoding I frame data, only the current frame's data is needed to reconstruct the complete image, whereas a P frame must reference a previously encoded frame to reconstruct the complete image.
  • the video coding mode adopted by each frame image in a static format image file or a dynamic format image file is not limited in the embodiment of the present application.
  • I frame coding is performed on the RGB data and the transparency data of the first image. For example, since a picture file in a dynamic format generally includes at least two frames of images, in the embodiment of the present application the RGB data and the transparency data of the first frame image in the dynamic format picture file are I-frame encoded; the RGB data and transparency data of frames other than the first frame may be I-frame encoded or P-frame encoded.
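The frame-type choice described above can be sketched as a small helper (an illustrative simplification, not the patent's algorithm; the function name and the `force_key_frame` switch are assumptions):

```python
def choose_coding_mode(frame_index: int, force_key_frame: bool = False) -> str:
    """Pick I- or P-frame coding for one plane (RGB or transparency) of a frame.

    The first frame has no previously encoded frame to reference, so it must
    be I-frame (key frame) coded; any later frame may be P-frame coded
    against its predecessor, or I-frame coded if a key frame is forced.
    """
    if frame_index == 0 or force_key_frame:
        return "I"
    return "P"
```

With this rule, frame 0 is always a key frame, while later frames default to P-frame coding unless the encoder chooses to insert another key frame.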
  • Step 104 Write the first code stream data and the second code stream data into a code stream data segment of the picture file.
  • the encoding device writes the first code stream data generated by the RGB data of the first image and the second code stream data generated by the transparency data of the first image into the code stream data segment of the picture file.
  • the first code stream data and the second code stream data together constitute the complete code stream data corresponding to the first image; that is, the RGBA data of the first image can be obtained by decoding the first code stream data and the second code stream data.
  • there is no fixed execution order between step 102 and step 103.
  • the RGBA data input before encoding in the embodiment of the present application may be obtained by decoding picture files of various formats, where the format of the picture file may be Joint Photographic Experts Group (JPEG), Bitmap (BMP), Portable Network Graphics (PNG), Animated Portable Network Graphics (APNG), Graphics Interchange Format (GIF), or the like. The embodiments of the present application do not limit the format of the picture file before encoding.
  • the foregoing describes the case where the data corresponding to the first image in the embodiment of the present application is RGBA data including RGB data and transparency data. If the data corresponding to the first image includes only RGB data, the encoding device may obtain the RGB data corresponding to the first image, perform step 102 on the RGB data to generate the first code stream data, and determine the first code stream data as the complete code stream data corresponding to the first image. In this way, a first image that includes only RGB data can also be encoded in a video coding mode to achieve compression of the first image.
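A minimal sketch of this branching, assuming `encode_plane` stands in for a real video encoder (the callable and its use here are illustrative assumptions, not an API from the patent):

```python
def build_code_streams(rgb, alpha, encode_plane):
    """Return the complete code stream data for one image.

    With transparency data present, both the first (RGB) and second
    (transparency) code streams are produced; with RGB-only input, the
    first code stream alone is the complete code stream data.
    """
    streams = [encode_plane(rgb)]            # first code stream data
    if alpha is not None:
        streams.append(encode_plane(alpha))  # second code stream data
    return streams
```

The caller supplies whatever per-plane encoder it uses; the point is only that the transparency plane is encoded and carried if and only if it exists.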
  • In summary, the encoding device acquires RGBA data corresponding to the first image in the picture file, separates the RGBA data to obtain RGB data and transparency data of the first image, encodes the RGB data of the first image according to the first video coding mode to generate the first code stream data, encodes the transparency data of the first image according to the second video coding mode to generate the second code stream data, and writes the first code stream data and the second code stream data into the code stream data segment of the picture file.
  • In this way, the compression ratio of the picture file can be improved and the size of the picture file reduced, thereby improving image loading speed and saving network transmission bandwidth and storage costs. In addition, because the RGB data and the transparency data in the picture file are encoded separately, the transparency data in the picture file is preserved while a video coding mode is adopted, thereby ensuring the quality of the picture file.
  • FIG. 2 is a schematic flowchart of another method for processing a picture file according to an embodiment of the present application.
  • the method may be performed by the foregoing computing device.
  • the computing device is a terminal device, and the method in the embodiment of the present application may include steps 201 to 207.
  • the embodiment of the present application is described by taking a picture file in a dynamic format as an example. For details, refer to the following.
  • Step 201 Acquire RGBA data corresponding to the first image corresponding to the kth frame in the dynamic format image file, and separate the RGBA data to obtain RGB data and transparency data of the first image.
  • the encoding device running in the terminal device acquires a picture file in a dynamic format to be encoded, where the picture file in the dynamic format includes at least two frames of images, and the encoding device acquires the kth frame in the picture file of the dynamic format.
  • the kth frame may be any one of the at least two frames of images, and k is a positive integer greater than 0.
  • the encoding apparatus may perform encoding according to the sequence of the image corresponding to each frame in the picture file of the dynamic format, that is, the first frame corresponding to the picture file of the dynamic format may be acquired first. image.
  • the embodiment of the present application does not limit the order in which the encoding device acquires the image included in the dynamic format image file.
  • the RGBA data is a color space representing Red, Green, Blue, and Alpha.
  • the RGBA data corresponding to the first image is separated into RGB data and transparency data.
  • each pixel corresponds to one RGBA data, and therefore, the first image composed of N pixels includes N RGBA data, and the form is as follows:
  • the encoding device needs to separate the RGBA data of the first image to obtain the RGB data and transparency data of the first image. For example, after the separation operation is performed on the first image composed of the N pixels, the RGB data and the transparency data of each of the N pixels are obtained, in the following form:
  • step 202 and step 203 are respectively performed.
  • Step 202 Encode RGB data of the first image according to a first video coding mode to generate first code stream data.
  • the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data.
  • the RGB data is color data separated from RGBA data corresponding to the first image.
  • Step 203 Encode transparency data of the first image according to a second video coding mode to generate second code stream data.
  • the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode to generate second code stream data.
  • the transparency data is separated from the RGBA data corresponding to the first image.
  • there is no fixed execution order between step 202 and step 203.
  • Step 204 Write the first code stream data and the second code stream data into a code stream data segment of the picture file.
  • the encoding device writes the first code stream data generated by the RGB data of the first image and the second code stream data generated by the transparency data of the first image into the code stream data segment of the picture file.
  • the first code stream data and the second code stream data together constitute the complete code stream data corresponding to the first image; that is, the RGBA data of the first image can be obtained by decoding the first code stream data and the second code stream data.
  • Step 205 Determine whether the kth frame is the last frame in the picture file of the dynamic format.
  • the encoding apparatus determines whether the kth frame is the last frame in the picture file of the dynamic format. If it is the last frame, encoding of the picture file in the dynamic format is complete, and step 207 is performed; if it is not the last frame, an unencoded image remains in the picture file of the dynamic format, and step 206 is performed.
  • Step 206 If the kth frame is not the last frame in the picture file of the dynamic format, update k, and trigger execution of acquiring RGBA data corresponding to the first image corresponding to the kth frame in the picture file of the dynamic format, and The RGBA data is separated to obtain an operation of RGB data and transparency data of the first image.
  • if the encoding device determines that the kth frame is not the last frame in the picture file of the dynamic format, the image corresponding to the next frame is encoded; that is, k is updated to (k+1).
  • After updating k, the encoding device triggers execution of the operation of acquiring the RGBA data corresponding to the first image corresponding to the kth frame in the picture file of the dynamic format and separating the RGBA data to obtain the RGB data and transparency data of the first image.
  • the image acquired by using the updated k is not the same image as the image acquired before the k update.
  • for ease of distinction, the image corresponding to the kth frame before the update of k is referred to as the first image, and the image corresponding to the kth frame after the update of k is referred to as the second image.
  • the RGBA data corresponding to the second image likewise includes RGB data and transparency data. The encoding device encodes the RGB data of the second image according to a third video coding mode to generate third code stream data, encodes the transparency data of the second image according to a fourth video coding mode to generate fourth code stream data, and writes the third code stream data and the fourth code stream data into a code stream data segment of the picture file.
  • the first video coding mode, the second video coding mode, the third video coding mode, or the fourth video coding mode involved may include, but is not limited to, an I frame coding mode and a P Frame coding mode.
  • the I frame represents a key frame.
  • the P frame needs to refer to the previous encoded frame to reconstruct the complete image.
  • the video coding mode adopted by the RGB data and the transparency data in each frame image in the dynamic format image file is not limited in the embodiment of the present application. For example, RGB data and transparency data in the same frame image may be encoded according to different video encoding modes; or, encoding may be performed in the same video encoding mode.
  • the RGB data in different frame images may be encoded according to different video coding modes; or, the same video coding mode may be used for encoding.
  • the transparency data in different frame images may be encoded according to different video coding modes; or, the same video coding mode may be used for encoding.
  • the image file of the dynamic format includes a plurality of code stream data segments.
  • in some embodiments, one frame image corresponds to one code stream data segment; or, in other embodiments of the present application, one piece of code stream data corresponds to one code stream data segment. Therefore, the code stream data segment into which the first code stream data and the second code stream data are written is different from the code stream data segment into which the third code stream data and the fourth code stream data are written.
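The one-segment-per-frame variant can be sketched with length-prefixed streams. The 4-byte big-endian length fields here are illustrative assumptions, not the patent's actual layout:

```python
import struct

def pack_segment(rgb_stream: bytes, alpha_stream: bytes) -> bytes:
    """Pack one frame's RGB and transparency code streams into one segment.

    Each stream is prefixed with its 4-byte big-endian length so a decoder
    can split the segment back into the two code streams.
    """
    return (struct.pack(">I", len(rgb_stream)) + rgb_stream
            + struct.pack(">I", len(alpha_stream)) + alpha_stream)

def unpack_segment(segment: bytes):
    """Recover the (rgb_stream, alpha_stream) pair from a packed segment."""
    n = struct.unpack_from(">I", segment, 0)[0]
    rgb = segment[4:4 + n]
    m = struct.unpack_from(">I", segment, 4 + n)[0]
    alpha = segment[8 + n:8 + n + m]
    return rgb, alpha
```

Packing and then unpacking a segment round-trips the two code streams, which is the property a per-frame segment layout needs.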
  • FIG. 3 is an exemplary diagram of a multi-frame image included in a dynamic format image file provided by an embodiment of the present application.
  • As shown in FIG. 3, the picture file of the dynamic format includes multiple frames of images, for example, an image corresponding to the first frame, an image corresponding to the second frame, an image corresponding to the third frame, an image corresponding to the fourth frame, and so on, where the image corresponding to each frame includes RGB data and transparency data.
  • the encoding apparatus may encode the RGB data and the transparency data in the image corresponding to the first frame according to the I frame encoding mode, and encode the images corresponding to the second frame, the third frame, the fourth frame, and so on according to the P frame encoding mode.
  • when the RGB data in the image corresponding to the second frame is encoded according to the P frame encoding mode, the RGB data in the image corresponding to the first frame needs to be referred to; when the transparency data in the image corresponding to the second frame is encoded according to the P frame encoding mode, the transparency data in the image corresponding to the first frame needs to be referred to; and so on, other frames such as the third frame and the fourth frame can be encoded in the P frame encoding mode with reference to their respective preceding frames.
  • the above is an optional coding scheme for the dynamic format image file; alternatively, the encoding apparatus may encode the first frame, the second frame, the third frame, the fourth frame, and the like all in the I frame coding mode.
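The frame-type choices above can be pictured with a tiny selector. This is a minimal illustrative sketch (not part of the patent text) assuming frames are indexed from 0:

```python
def choose_mode(frame_index, all_intra=False):
    """Return 'I' for a key frame, 'P' for a frame coded with reference
    to a previously encoded frame."""
    if all_intra:  # alternative scheme: every frame is an I frame
        return "I"
    return "I" if frame_index == 0 else "P"

modes = [choose_mode(i) for i in range(4)]
# -> ['I', 'P', 'P', 'P']
```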
  • Step 207 If the kth frame is the last frame in the picture file of the dynamic format, complete encoding of the picture file in the dynamic format.
  • the encoding apparatus determines that the kth frame is the last frame in the picture file of the dynamic format, and indicates that the picture file encoding of the dynamic format is completed.
  • the encoding apparatus may generate frame header information for the code stream data generated from the image corresponding to each frame, and generate picture header information for the dynamic format image file, so that it can be determined from the picture header information whether the picture file contains transparency data, and further whether only the first code stream data generated from the RGB data is acquired in the decoding process, or both the first code stream data generated from the RGB data and the second code stream data generated from the transparency data are acquired.
  • in the embodiment of the present application, the image corresponding to each frame in the dynamic format image file is RGBA data including RGB data and transparency data; in other embodiments, the image corresponding to each frame in the dynamic format image file only includes RGB data.
  • in this case, the encoding device may perform step 202 on the RGB data of each frame image to generate the first code stream data and write the first code stream data into the code stream data segment of the picture file, and finally the first code stream data is determined as the complete code stream data corresponding to the first image, so that the first image containing only RGB data can still be encoded in the video encoding mode to achieve compression of the first image.
  • the RGBA data input before encoding in the embodiment of the present application may be obtained by decoding image files of various dynamic formats, where the dynamic format of the image file may be any of APNG, GIF, and the like.
  • the dynamic format of the picture file before encoding is not limited in this embodiment of the present application.
  • the encoding device acquires the RGBA data corresponding to the first image in the picture file, and obtains the RGB data and the transparency data of the first image by separating the RGBA data; encodes the RGB data of the first image according to a first video coding mode to generate first code stream data; encodes the transparency data of the first image according to a second video coding mode to generate second code stream data; and writes the first code stream data and the second code stream data into the code stream data segment.
  • the image corresponding to each frame in the dynamic format image file can be encoded in the manner of the first image.
  • the compression ratio of the image file can be improved, and the size of the image file can be reduced, thereby improving the image loading speed and saving network transmission bandwidth and storage cost; in addition, by separately encoding the RGB data and the transparency data in the picture file, the transparency data in the picture file is preserved while the video coding mode is adopted, thereby ensuring the quality of the picture file.
  • FIG. 4 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present disclosure, which may be performed by the foregoing computing device.
  • the computing device is a terminal device, and the method in the embodiment of the present application may include steps 301 to 307.
  • Step 301 Acquire RGBA data corresponding to the first image in the picture file, and separate the RGBA data to obtain RGB data and transparency data of the first image.
  • the encoding device running in the terminal device acquires RGBA data corresponding to the first image in the picture file, and separates the RGBA data to obtain RGB data and transparency data of the first image.
  • the data corresponding to the first image is RGBA data.
  • RGBA data is a color space representing Red, Green, Blue, and Alpha.
  • the RGBA data corresponding to the first image is separated into RGB data and transparency data.
  • the RGB data is color data included in the RGBA data
  • the transparency data is transparency data included in the RGBA data.
  • the first image composed of N pixels includes N RGBA data, in the form of:
  • the encoding apparatus needs to separate the RGBA data of the first image to obtain the RGB data and transparency data of the first image; for example, the first image composed of the N pixel points is separated to obtain the RGB data of each of the N pixel points and the transparency data of each pixel point, in the following form:
  • step 302 and step 303 are respectively performed.
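As one way to picture the separation in step 301, here is a minimal sketch in Python; the function and variable names are illustrative, not from the patent:

```python
def separate_rgba(pixels):
    """Split interleaved RGBA tuples into an RGB plane and a
    transparency (alpha) plane, as described for step 301."""
    rgb = [(r, g, b) for r, g, b, a in pixels]
    alpha = [a for _, _, _, a in pixels]
    return rgb, alpha

# A first image composed of N = 4 pixel points:
pixels = [(255, 0, 0, 128), (0, 255, 0, 255),
          (0, 0, 255, 0), (255, 255, 255, 64)]
rgb, alpha = separate_rgba(pixels)
# rgb   -> [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
# alpha -> [128, 255, 0, 64]
```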
  • Step 302 Encode RGB data of the first image according to a first video coding mode to generate first code stream data.
  • the encoding device encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data.
  • the first image may be a frame image included in a static format image file; or the first image may be any frame image included in a dynamic format image file.
  • the specific process of encoding the RGB data of the first image according to the first video encoding mode and generating the first code stream data is: converting the RGB data of the first image into first YUV data; and encoding the first YUV data according to the first video coding mode to generate the first code stream data.
  • the encoding device may convert the RGB data into first YUV data in a preset YUV color space format, for example, the preset YUV color space format may include, but is not limited to, YUV420, YUV422, and YUV444.
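The conversion formula itself is not given in this text; a common choice is the full-range BT.601 mapping, sketched below per pixel for YUV444. The coefficients are an assumption for illustration, not taken from the patent:

```python
def rgb_to_yuv444(r, g, b):
    """Full-range BT.601 RGB -> YUV for one pixel (YUV444 keeps one
    YUV triple per pixel).  Coefficients are illustrative."""
    def clamp(x):
        return max(0, min(255, int(round(x))))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return clamp(y), clamp(u), clamp(v)

# White and black map to (255, 128, 128) and (0, 128, 128) respectively.
```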
  • Step 303 Encode transparency data of the first image according to a second video coding mode to generate second code stream data.
  • the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode to generate second code stream data.
  • the first video encoding mode for step 302 or the second video encoding mode of step 303 may include, but is not limited to, an I frame encoding mode and a P frame encoding mode.
  • the I frame represents a key frame.
  • the P frame needs to refer to the previous encoded frame to reconstruct the complete image.
  • the video coding mode adopted by each frame image in a static format image file or a dynamic format image file is not limited in the embodiment of the present application.
  • I frame coding is performed on the RGB data and the transparency data of the first image. For example, for a dynamic format image file, since the dynamic format image file includes at least two frames of images, in the embodiment of the present application, I frame encoding is performed on the RGB data and the transparency data of the first frame image in the dynamic format image file; for the RGB data and the transparency data of a non-first frame image, I frame encoding may be performed, or P frame encoding may be performed.
  • the specific process of encoding, by the encoding device, the transparency data of the first image according to the second video encoding mode and generating the second code stream data is: converting the transparency data of the first image into second YUV data; and encoding the second YUV data according to the second video encoding mode to generate the second code stream data.
  • the encoding device converts the transparency data of the first image into the second YUV data. Specifically, in some embodiments of the present application, the encoding device sets the transparency data of the first image as the Y component in the second YUV data, and does not set the U component and the V component in the second YUV data; or, in other embodiments of the present application, the encoding device sets the transparency data of the first image as the Y component in the second YUV data, and sets the U component and the V component in the second YUV data as preset data. In the embodiment of the present application, the encoding device may convert the transparency data into the second YUV data in a preset YUV color space format.
  • the preset YUV color space format may include, but is not limited to, YUV400, YUV420, YUV422, and YUV444, and the U component and the V component may be set according to the YUV color space format.
  • the encoding device obtains RGB data and transparency data of the first image by separating the RGBA data of the first image.
  • the following is an example of converting the RGB data of the first image into the first YUV data and converting the transparency data of the first image into the second YUV data.
  • the first image including four pixel points is taken as an example for description: the RGB data of the first image is the RGB data of the four pixel points, and the transparency data of the first image is the transparency data of the four pixel points; see the illustrations of FIG. 4b to FIG. 4d.
  • FIG. 4b is a diagram showing an example of RGB data to YUV data provided by an embodiment of the present application.
  • the RGB data includes the RGB data of 4 pixel points, and the RGB data of the 4 pixel points is converted according to the color space conversion mode. If the YUV color space format is YUV444, the RGB data of each pixel point can be converted into one piece of YUV data by using the corresponding conversion formula, so that the RGB data of the four pixel points is converted into four pieces of YUV data, and the first YUV data contains the four pieces of YUV data.
  • the conversion formulas corresponding to different YUV color space formats are different.
  • FIG. 4c and FIG. 4d are diagrams of an example of transparency data to YUV data provided by an embodiment of the present application.
  • the transparency data contains the A data of 4 pixel points, where A represents transparency; the transparency data of each pixel point is set as the Y component, and then the second YUV data is determined according to the YUV color space format.
  • in one case, the U and V components are not set, and the Y components of the 4 pixel points are determined as the second YUV data of the first image (as shown in FIG. 4c).
  • in another case, the U and V components are set as preset data, as shown in FIG. 4d, where FIG. 4d is converted in the YUV444 color space format, that is, a U component and a V component of the preset data are set for each pixel point.
  • if the YUV color space format is YUV422, one U component and one V component of the preset data are set for every two pixel points; or, if the YUV color space format is YUV420, one U component and one V component of the preset data are set for every four pixel points.
  • other formats are deduced by analogy, and are not described here again; finally, the YUV data of the 4 pixel points is determined as the second YUV data of the first image.
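The alpha-to-YUV packing just described can be sketched as follows; the preset chroma value of 128 and the function shape are assumptions for illustration:

```python
def alpha_to_yuv(alpha_plane, fmt="YUV400", preset=128):
    """Set each pixel's transparency value as the Y component, then
    fill the U/V planes according to the YUV color space format."""
    y = list(alpha_plane)
    if fmt == "YUV400":  # no chroma planes: U and V are left unset
        return y, [], []
    # how many pixels share one U/V pair in each format
    share = {"YUV444": 1, "YUV422": 2, "YUV420": 4}[fmt]
    n = len(y) // share
    return y, [preset] * n, [preset] * n

y, u, v = alpha_to_yuv([128, 255, 0, 64], "YUV444")
# y -> [128, 255, 0, 64]; u and v each hold four preset values
```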
  • there is no fixed order between step 302 and step 303 in the execution process.
  • Step 304 Write the first code stream data and the second code stream data into a code stream data segment of the picture file.
  • the encoding device writes the first code stream data generated by the RGB data of the first image and the second code stream data generated by the transparency data of the first image into the code stream data segment of the picture file.
  • the first code stream data and the second code stream data are complete code stream data corresponding to the first image, that is, the RGBA of the first image can be obtained by decoding the first code stream data and the second code stream data. data.
  • Step 305 Generate picture header information and frame header information corresponding to the picture file.
  • the encoding apparatus generates picture header information and frame header information corresponding to the picture file.
  • the image file may be a static format image file, that is, only the first image is included; or the image file is a dynamic format image file, that is, the first image and other images are included.
  • the encoding device needs to generate picture header information corresponding to the picture file.
  • the picture header information includes image feature information indicating whether the picture file has transparency data, so that the decoding device determines, by the image feature information, whether the picture file includes transparency data, thereby determining how to obtain code stream data and acquiring Whether the obtained code stream data contains the second code stream data generated by the transparency data.
  • the frame header information is used to indicate a code stream data segment of the picture file, so that the decoding device determines, by using the frame header information, a code stream data segment that can acquire the code stream data, thereby implementing decoding of the code stream data.
  • the order of generating the picture header information and the frame header information corresponding to the picture file relative to steps 302, 303, and 304 is not limited.
  • Step 306 Write the picture header information into a picture header information data segment of the picture file.
  • the encoding device writes the picture header information into a picture header information data segment of the picture file.
  • the picture header information includes an image file identifier, a decoder identifier, a version number, and the image feature information;
  • the image file identifier is used to indicate the type of the picture file, and the decoder identifier is used to indicate the codec standard used by the picture file;
  • the version number is used to indicate the level of the codec standard used by the picture file.
  • the picture header information may further include a user-defined information data segment, where the user-defined information data segment includes a user-defined information start code, a user-defined information data segment length, and user-defined information.
  • the user-defined information includes Exchangeable Image File (EXIF) information, such as aperture, shutter, white balance, International Organization for Standardization (ISO) , focal length, date and time, shooting conditions, camera brand, model, color coding, sound recorded during shooting, GPS data, thumbnails, etc.
  • User-defined information contains information that can be customized by the user. This embodiment of the present application does not limit this.
  • the image feature information further includes the image feature information start code, the image feature information data segment length, whether the picture file is a static format picture file or a dynamic format picture file, whether the picture file is losslessly coded, the YUV color space value range adopted by the picture file, the width of the picture file, the height of the picture file, and, for a picture file in the dynamic format, the number of frames of the picture file.
  • the image feature information may further include a YUV color space format adopted by the picture file.
  • FIG. 5a is an exemplary diagram of picture header information provided by an embodiment of the present application.
  • the picture header information of a picture file is composed of an image sequence header data segment, an image feature information data segment, and a user-defined information data segment.
  • the image sequence header data segment includes an image file identifier, a decoder identifier, a version number, and the image feature information.
  • Image file identifier used to indicate the type of the picture file, which can be represented by a preset identifier; image_identifier occupies 4 bytes.
  • for example, the image file identifier is the bit string 'AVSP', identifying this as an AVS image file.
  • Decoder identifier An identifier used to indicate the codec standard used to compress the current picture file, for example, in 4 bytes. Or it can be interpreted as indicating the decoder core model used for the current picture decoding. When the AVS2 core is used, the decoder identifier code_id is 'AVS2'.
  • Version number used to indicate the level of the codec standard indicated by the compression standard identifier.
  • the level may include a Baseline Profile, a Main Profile, an Extended Profile, and the like.
  • an 8-bit unsigned number identifier is used, as shown in Table 1, which gives the type of the version number.
  • the image feature information data segment includes an image feature information start code, an image feature information data segment length, an alpha channel mark (i.e., the image transparency mark shown in FIG. 5b), a moving image mark (i.e., the dynamic image mark shown in FIG. 5b), a YUV color space format, a lossless mode mark, a YUV color space range mark (i.e., the YUV color space value field mark shown in FIG. 5b), reserved bits, an image width, an image height, and a frame number.
  • the image feature information start code is a field for indicating the start position of the image feature information data segment of the picture file, for example, represented by 1 byte, and the field D0 is employed.
  • Image feature information data segment length indicates the number of bytes occupied by the image feature information data segment, for example, represented by 2 bytes.
  • for the picture file in the dynamic format, the image feature information data segment in FIG. 5b occupies 12 bytes in total, so 12 can be filled in; for the static format image file, the image feature information data segment in FIG. 5b occupies 9 bytes in total, so 9 can be filled in.
  • Image transparency flag used to indicate whether the image in the picture file carries transparency data. For example, one bit is used: 0 means that the image in the picture file does not carry transparency data, and 1 means that the image in the picture file carries transparency data. It can be understood that whether there is an alpha channel and whether there is transparency data represent the same meaning.
  • Dynamic image flag used to indicate whether the picture file is a dynamic format picture file or a static format picture file. For example, one bit is used: 0 means a static format picture file, and 1 means a dynamic format picture file.
  • YUV color space format The chroma component format used to indicate the conversion of RGB data of a picture file into YUV data, for example, represented by two bits, as shown in Table 2 below.
  • YUV color space format value | YUV color space format
    00 | 4:0:0
    01 | 4:2:0
    10 | 4:2:2 (reserved)
    11 | 4:4:4
  • Lossless mode flag used to indicate whether the coding is lossless or lossy. For example, one bit is used: 0 means lossy coding, and 1 means lossless coding, where directly encoding the RGB data in the picture file in the video coding mode means lossless encoding, and first converting the RGB data in the picture file into YUV data and then encoding the YUV data means lossy encoding.
  • YUV color space value field flag used to indicate whether the YUV color space range complies with the ITU-R BT.601 standard. For example, one bit is used: 1 indicates that the Y component has a range of [16, 235] and the U and V components have a range of [16, 240]; 0 indicates that the Y component and the U and V components have a range of [0, 255].
  • Reserved bits 10-bit unsigned integer. The extra bits in the byte are set as reserved bits.
  • Image Width Used to indicate the width of each image in the image file. For example, if the image width ranges from 0 to 65535, it can be represented by 2 bytes.
  • Image height used to indicate the height of each image in the image file. For example, if the image height ranges from 0 to 65535, it can be represented by 2 bytes.
  • Number of image frames It only exists in the case of a dynamic format image file. It is used to indicate the total number of frames included in the image file, for example, in 3 bytes.
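The field widths above can be exercised with a serialization sketch. The byte counts (4-byte 'AVSP', 4-byte 'AVS2', 1-byte version number, 1-byte start code 0xD0, 2-byte segment length, 2-byte width and height, 3-byte frame count) follow the text, but the bit positions inside the flag bytes and the version value 0 are assumptions:

```python
import struct

def pack_picture_header(width, height, alpha, dynamic, yuv_fmt,
                        lossless, bt601, frame_count=0):
    """Serialize the image sequence header plus the image feature
    information data segment.  Bit layout of the flag field is assumed."""
    flags = (alpha << 15) | (dynamic << 14) | (yuv_fmt << 12) \
            | (lossless << 11) | (bt601 << 10)  # low 10 bits reserved
    body = struct.pack(">HHH", flags, width, height)
    if dynamic:
        body += frame_count.to_bytes(3, "big")  # dynamic format only
    seg_len = 3 + len(body)  # start code + length field + remaining fields
    feature = b"\xd0" + struct.pack(">H", seg_len) + body
    return b"AVSP" + b"AVS2" + b"\x00" + feature

hdr = pack_picture_header(640, 480, alpha=1, dynamic=0,
                          yuv_fmt=0b11, lossless=0, bt601=1)
# feature segment of a static-format file: 9 bytes; dynamic: 12 bytes
```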
  • FIG. 5c is an exemplary diagram of a user-defined information data segment provided by an embodiment of the present application; for details, refer to the following description.
  • User-defined information start code is a field for indicating the start position of the user-defined information; for example, the bit string '0x000001BC' identifies the start of the user-defined information.
  • User-defined information data segment length indicates the data length of the current user-defined information, for example, in 2 bytes.
  • User-defined information used to write data that the user needs to pass in, such as EXIF.
  • the number of bytes occupied can be determined according to the length of the user-defined information.
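A sketch of writing the user-defined information data segment, using the start code and 2-byte length field described above. Whether the length counts the start code is not specified here, so counting only the payload is an assumption:

```python
def pack_user_defined(info):
    """Wrap user-defined data (e.g. raw EXIF bytes) with the
    user-defined information start code and a 2-byte length field."""
    start = bytes.fromhex("000001bc")  # '0x000001BC' from the text
    return start + len(info).to_bytes(2, "big") + info

seg = pack_user_defined(b"example EXIF payload")  # payload is illustrative
```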
  • Step 307 Write the frame header information into a header information data segment of the picture file.
  • the encoding device writes the frame header information into a header information data segment of the picture file.
  • one frame image of the picture file corresponds to one frame header information.
  • the static format image file includes one frame image, that is, the first image. Therefore, the static format image file includes a frame header information.
  • the picture file of the dynamic format generally includes at least two frames of images, and one frame header information is added for each of the frames.
  • FIG. 6 is a diagram showing an example of encapsulation of a picture file in a static format according to an embodiment of the present application.
  • the picture file includes a picture header information data segment, a frame header information data segment, and a code stream data segment.
  • a static format image file includes picture header information, frame header information, and code stream data representing one frame image of the picture file, where the code stream data includes the first code stream data generated from the RGB data of the frame image and the second code stream data generated from the transparency data of the frame image.
  • Each piece of information or data is written into the corresponding data segment, for example, the picture header information is written into the picture header information data segment; the frame header information is written into the frame header information data segment; and the code stream data is written into the code stream data segment.
  • since the first code stream data and the second code stream data in the code stream data segment are obtained through a video coding mode, the code stream data segment can be described by using a video frame data segment, so that the information written into the video frame data segment is the first code stream data and the second code stream data obtained by encoding the static format image file.
  • FIG. 6 is a schematic diagram of a package of a picture file in a dynamic format according to an embodiment of the present application.
  • the picture file includes a picture header information data segment, a plurality of frame header information data segments, and a plurality of code stream data segments.
  • a dynamic format picture file includes picture header information, a plurality of header information, and code stream data representing a plurality of frames of images.
  • the code stream data corresponding to one frame image corresponds to one frame header information, wherein the code stream data representing each frame image includes first code stream data generated by RGB data of the frame image and transparency data from the frame image. The generated second stream data.
  • Writing each piece of information or data into the corresponding data segment for example, writing the picture header information into the picture header information data segment; writing the frame header information corresponding to the first frame to the frame header information data segment corresponding to the first frame;
  • the code stream data corresponding to the first frame is written into the code stream data segment corresponding to the first frame, and so on, the frame header information corresponding to the multiple frames is written into the header information segment corresponding to each frame, and the multiple frames are corresponding.
  • the code stream data is written into the code stream data segment corresponding to each frame. It should be noted that, since the first code stream data and the second code stream data in the code stream data segment are obtained through a video coding mode, the code stream data segment can be described by using a video frame data segment, so that the information written into the video frame data segment corresponding to each frame image is the first code stream data and the second code stream data obtained by encoding the frame image.
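Once each segment has been serialized, the per-frame layout described above amounts to simple concatenation. A minimal sketch (the helper and its byte-string inputs are assumptions; framing each piece is left to the caller):

```python
def encapsulate_dynamic(picture_header, frames):
    """Lay out a dynamic-format picture file: the picture header
    information data segment, then for each frame its frame header
    information data segment followed by its code stream data segment
    (first and second code stream data written together)."""
    out = bytearray(picture_header)
    for frame_header, first_cs, second_cs in frames:
        out += frame_header + first_cs + second_cs
    return bytes(out)

f = encapsulate_dynamic(b"HDR", [(b"h1", b"rgb1", b"a1"),
                                 (b"h2", b"rgb2", b"a2")])
# f -> b"HDRh1rgb1a1h2rgb2a2"
```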
  • one code stream data in one frame of the picture file corresponds to one frame header information.
  • the static format image file includes one frame image, that is, the first image, and the first image including the transparency data corresponds to two code stream data, which are respectively the first code stream data. And the second code stream data. Therefore, the first code stream data in the static format image file corresponds to one frame header information, and the second code stream data corresponds to another frame header information.
  • the dynamic format picture file contains at least two frames of images, and each frame image containing transparency data corresponds to two code stream data, which are the first code stream data and the second code stream data, respectively. And adding a frame header information to each of the first stream data and the second stream data of each frame image.
  • FIG. 7 is a diagram showing an example of encapsulation of another static format image file according to an embodiment of the present application.
  • the image frame header information and the transparent channel frame header information are distinguished here: the first code stream data generated from the RGB data corresponds to the image frame header information, and the second code stream data generated from the transparency data corresponds to the transparent channel frame header information.
  • the picture file includes a picture header information data segment, an image frame header information data segment corresponding to the first code stream data, a first code stream data segment, a transparent channel frame header information data segment corresponding to the second code stream data, and a second code stream data segment.
  • a static format picture file includes picture header information, two frame header information, and first code stream data and second code stream data representing one frame image, wherein the first code stream data is generated from RGB data of the frame image
  • the second stream data is generated from the transparency data of the frame image.
  • Write each information or data into the corresponding data segment for example, write the picture header information into the picture header information data segment; and write the image frame header information corresponding to the first code stream data into the image frame corresponding to the first code stream data.
  • a header information data segment; the first code stream data is written into the first code stream data segment; and the transparent channel frame header information corresponding to the second code stream data is written into the transparent channel frame header information data segment corresponding to the second code stream data;
  • the second code stream data is written to the second code stream data segment.
  • the image frame header information data segment and the first code stream data segment corresponding to the first code stream data may be set as an image frame data segment, and the transparent channel frame header information data segment and the second code stream data segment corresponding to the second code stream data may be set as a transparent channel frame data segment.
  • the name of each data segment and the data segment name combined with each data segment are not limited in this embodiment of the present application.
  • the encoding apparatus may arrange the frame header information data segments and the code stream data segments according to a preset order, for example, in the order of the frame header information data segment corresponding to the first code stream data, the first code stream data segment, the frame header information data segment corresponding to the second code stream data, and the second code stream data segment, so that in the decoding process of the decoding device, among the two pieces of frame header information indicating the frame image and the code stream data segments indicated by the two frame headers, it can be determined from which one the first code stream data can be obtained and from which one the second code stream data can be obtained.
  • FIG. 7 is a diagram showing an example of encapsulation of another dynamic format image file according to an embodiment of the present application.
  • the image frame header information and the transparent channel frame header information are distinguished here: the first code stream data generated from the RGB data corresponds to the image frame header information, and the second code stream data generated from the transparency data corresponds to the transparent channel frame header information.
  • the picture file includes a picture header information data segment, a plurality of frame header information data segments, and a plurality of code stream data segments.
  • a dynamic format picture file includes picture header information, a plurality of header information, and code stream data representing a plurality of frames of images.
  • the first code stream data and the second code stream data corresponding to one frame image respectively correspond to one frame header information, wherein the first code stream data is generated by RGB data of the frame image, and the second code stream data is The transparency data of the frame image is generated.
  • the image frame header information data segment and the first code stream data segment corresponding to the first code stream data may be set as an image frame data segment, and the transparent channel frame header information data segment and the second code stream data segment corresponding to the second code stream data may be set as a transparent channel frame data segment.
  • the name of each data segment and the data segment name combined with each data segment are not limited in this embodiment of the present application.
  • the frame header information includes the frame header information start code and, if the picture file is a picture file in the dynamic format, delay time information.
  • the frame header information further includes at least one of a length of the frame header information data segment and a code stream data segment length of the code stream data segment indicated by the frame header information.
  • the frame header information further includes unique information that is different from other frame images, such as the coding area information, the transparency information, the color table, and the like, which is not limited by the embodiment of the present application.
• The frame header information may be arranged as in the example diagram shown in FIG. 8a; the specifics are described below.
• Frame header information start code: a field indicating the start position of the frame header information, for example represented by 1 byte.
• Frame header information data segment length: the length of the frame header information, for example represented by 1 byte; this information is optional.
• Code stream data segment length: the code stream length of the code stream data segment indicated by the frame header information. Where the first code stream data and the second code stream data correspond to one piece of frame header information, this length is the sum of the length of the first code stream data and the length of the second code stream data; this information is optional.
• Delay time information: present only when the picture file is in a dynamic format, indicating the time difference between displaying the image of the current frame and displaying the image of the next frame, for example represented by 1 byte.
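The frame header layout described above can be sketched as follows. The field widths follow the examples in the text (1-byte start code, 1-byte header length, 1-byte delay time), but the 4-byte stream-length width, the concrete start-code value, and the field order are illustrative assumptions, not part of the specification:

```python
import struct

# Assumed layout: 1-byte start code, 1-byte header length,
# 4-byte code stream data segment length, 1-byte delay time.
def pack_frame_header(stream_len, delay_time):
    start_code = 0xB0          # hypothetical 1-byte start code value
    header_len = 7             # total bytes in this illustrative layout
    return struct.pack(">BBIB", start_code, header_len, stream_len, delay_time)

def parse_frame_header(buf):
    start_code, header_len, stream_len, delay_time = struct.unpack_from(">BBIB", buf, 0)
    return {"start_code": start_code,
            "header_len": header_len,
            "stream_len": stream_len,
            "delay_time": delay_time}
```

For a frame whose two code streams total 1234 bytes and whose display delay is 10 units, `pack_frame_header(1234, 10)` yields a 7-byte header that `parse_frame_header` recovers field by field.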
  • the frame header information is divided into image frame header information and transparent channel frame header information, please refer to FIG. 8b and FIG. 8c together.
• The image frame header information includes the image frame header information start code and, if the picture file is in a dynamic format, delay time information.
• The image frame header information further includes at least one of the length of the image frame header information data segment and the length of the first code stream data segment indicated by the image frame header information.
  • the image frame header information further includes unique information that is different from other frame images, such as the coding area information, the transparency information, the color table, and the like, which are not limited in this embodiment of the present application.
• Image frame header information start code: a field indicating the start position of the image frame header information, for example represented by 1 byte, such as the bit string '0x000001BA'.
• Image frame header information data segment length: the length of the image frame header information, for example represented by 1 byte; this information is optional.
• First code stream data segment length: the code stream length of the first code stream data segment indicated by the image frame header information; this information is optional.
• Delay time information: present only when the picture file is in a dynamic format, indicating the time difference between displaying the image of the current frame and displaying the image of the next frame, for example represented by 1 byte.
  • the transparent channel frame header information includes the transparent channel frame header information start code.
• The transparent channel frame header information further includes at least one of the length of the transparent channel frame header information data segment, the second code stream data segment length of the second code stream data segment indicated by the transparent channel frame header information, and, if the picture file is in a dynamic format, delay time information.
• The transparent channel frame header information further includes unique information that differs from other frame images, such as coding area information, transparency information, and the color table, which are not limited in this embodiment of the present application.
• Transparent channel frame header information start code: a field indicating the start position of the transparent channel frame header information, for example represented by 1 byte, such as the bit string '0x000001BB'.
• Transparent channel frame header information data segment length: the length of the transparent channel frame header information, for example represented by 1 byte; this information is optional.
• Second code stream data segment length: the code stream length of the second code stream data segment indicated by the transparent channel frame header information; this information is optional.
• Delay time information: present only when the picture file is in a dynamic format, indicating the time difference between displaying the image of the current frame and displaying the image of the next frame, for example represented by 1 byte; this information is optional.
• When the transparent channel frame header information does not include delay time information, the delay time information in the image frame header information may be referred to instead.
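Because each start code is stated to be unique within the compressed data, a decoder can locate image frame data segments and transparent channel frame data segments by scanning for the start codes. A minimal sketch using the example bit strings '0x000001BA' and '0x000001BB' from the text (the byte-level search itself is an illustrative assumption):

```python
IMAGE_FRAME_START = b"\x00\x00\x01\xBA"   # image frame header start code example
ALPHA_FRAME_START = b"\x00\x00\x01\xBB"   # transparent channel frame header start code example

def find_frame_starts(data):
    """Return (offset, kind) pairs for every frame header start code found.

    Relies on the property stated above: each start code is unique in the
    compressed data and never occurs inside payload bytes.
    """
    hits = []
    for code, kind in ((IMAGE_FRAME_START, "image"), (ALPHA_FRAME_START, "alpha")):
        pos = data.find(code)
        while pos != -1:
            hits.append((pos, kind))
            pos = data.find(code, pos + 1)
    return sorted(hits)
```

Scanning a buffer that interleaves an image frame segment and a transparent channel frame segment returns the two start offsets in file order, from which the segment boundaries follow.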
• The picture file, the image, the first code stream data, the second code stream data, the picture header information, the frame header information, and the information contained in the picture header information and the frame header information may appear under other names; for example, the picture file may be described as a "picture". As long as the function of each term is similar to that in the present application, it falls within the scope of the claims of the present application and their equivalent technologies.
  • the RGBA data input before encoding in the embodiment of the present application may be obtained by decoding image files of various formats, where the format of the image file may be JPEG, BMP, PNG, APNG, GIF, etc.
  • the format of the picture file before encoding is not limited.
  • each start code in the embodiment of the present application is unique in the entire compressed image data to play the role of uniquely identifying each data segment.
  • the picture file involved in the embodiment of the present application is used to represent a complete picture file or image file, which may contain one or more images, and the image refers to a frame picture.
  • the video frame data involved in the embodiment of the present application is code stream data obtained by video encoding each frame image in the image file.
• The first code stream data obtained after encoding the RGB data can be regarded as video frame data, and the second code stream data obtained after encoding the transparency data can also be regarded as video frame data.
• In the embodiment of the present application, the encoding device acquires the RGBA data corresponding to the first image in the picture file, obtains the RGB data and the transparency data of the first image by separating the RGBA data, encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data, and encodes the transparency data of the first image according to the second video encoding mode to generate the second code stream data; it then generates the picture header information and the frame header information corresponding to the picture file containing the first image; finally, the first code stream data and the second code stream data are written into the code stream data segment, the picture header information is written into the picture header information data segment, and the frame header information is written into the frame header information data segment.
• In this way, the compression ratio of the picture file can be improved and the size of the picture file reduced, thereby improving the picture loading speed and saving network transmission bandwidth and storage cost; in addition, by encoding the RGB data and the transparency data in the picture file separately, the transparency data in the picture file is preserved while the video encoding mode is adopted, ensuring the quality of the picture file.
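The separation step that precedes the two encoders can be sketched as follows. The flat interleaved pixel list is an illustrative assumption, and the video encoding of the two resulting planes is deliberately not shown:

```python
def separate_rgba(pixels):
    """Split interleaved RGBA pixels into an RGB plane and an alpha plane.

    `pixels` is a flat list [R, G, B, A, R, G, B, A, ...]; the RGB plane
    feeds the first video encoder and the alpha plane feeds the second.
    """
    rgb, alpha = [], []
    for i in range(0, len(pixels), 4):
        rgb.extend(pixels[i:i + 3])   # R, G, B go toward the first code stream
        alpha.append(pixels[i + 3])   # A goes toward the second code stream
    return rgb, alpha
```

For two pixels `[10, 20, 30, 255, 40, 50, 60, 128]`, the call returns the RGB plane `[10, 20, 30, 40, 50, 60]` and the alpha plane `[255, 128]`.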
  • FIG. 9 is a schematic flowchart diagram of a method for processing a picture file according to an embodiment of the present disclosure, where the method may be performed by the foregoing computing device.
• It is assumed that the computing device is a terminal device; the method in the embodiment of the present application may include steps 401 to 404.
  • Step 401 Acquire first code stream data and second code stream data generated by the first image in the picture file from a code stream data segment of the picture file.
  • the decoding device running in the terminal device acquires the first code stream data and the second code stream data generated by the first image in the picture file from the code stream data segment of the picture file.
  • Step 402 Decode the first code stream data according to the first video decoding mode to generate RGB data of the first image.
  • the decoding device running in the terminal device decodes the first code stream data according to the first video decoding mode.
• The first code stream data and the second code stream data are code stream data of the first image that the decoding device acquires from the code stream data segment by parsing the picture file, the first image being an image included in the picture file.
  • the decoding device acquires first code stream data and second code stream data representing the first image.
  • the first image may be a frame image included in a static format image file; or the first image may be any frame image included in a dynamic format image file.
• The picture file includes RGB data and transparency data, and information indicating the code stream data segments corresponding to the code stream data of different frame images is present in the picture file.
  • the decoding device decodes the first code stream data to generate RGB data of the first image.
  • Step 403 Decode the second code stream data according to the second video decoding mode to generate transparency data of the first image.
  • the decoding device decodes the second code stream data according to the second video decoding mode to generate transparency data of the first image.
• The second code stream data is acquired in the same manner as the first code stream data in step 402, and details are not repeated here.
• The first video decoding mode or the second video decoding mode may be determined according to the video encoding mode used to generate the first code stream data or the second code stream data. Taking the first code stream data as an example: if the first code stream data is encoded as an I frame, the first video decoding mode generates the RGB data from the current code stream data alone; if the first code stream data is encoded as a P frame, the first video decoding mode generates the RGB data of the current frame with reference to previously decoded data.
• The second video decoding mode may refer to the introduction of the first video decoding mode and is not repeated here.
• Step 402 and step 403 may be performed in any order.
  • Step 404 Generate RGBA data corresponding to the first image according to the RGB data of the first image and the transparency data.
  • the decoding device generates RGBA data corresponding to the first image according to the RGB data of the first image and the transparency data.
  • RGBA data is a color space representing Red, Green, Blue, and Alpha.
  • RGB data and transparency data can be synthesized into RGBA data.
• The code stream data encoded according to the video encoding mode can be used to generate the corresponding RGBA data through the corresponding video decoding mode, so that the transparency data in the picture file is preserved while the video codec mode is used, ensuring the quality and display effect of the picture file.
• The RGB data and the transparency data of the first image obtained by the decoding device through decoding are in the following form:
  • the decoding device combines the corresponding RGB data and transparency data to obtain RGBA data of the first image, and the form is as follows:
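The recombination of decoded RGB data and transparency data into RGBA data can be sketched as follows. This is an illustrative inverse of the separation done at encode time, with flat pixel lists as an assumption (it is not the referenced form itself):

```python
def merge_rgba(rgb, alpha):
    """Recombine a decoded RGB plane and alpha plane into interleaved RGBA.

    `rgb` is a flat list [R, G, B, R, G, B, ...] and `alpha` holds one
    value per pixel; the result is [R, G, B, A, R, G, B, A, ...].
    """
    out = []
    for i, a in enumerate(alpha):
        out.extend(rgb[3 * i:3 * i + 3])  # copy R, G, B of pixel i
        out.append(a)                     # append its alpha value
    return out
```

Merging the planes `[10, 20, 30, 40, 50, 60]` and `[255, 128]` reproduces the interleaved pixels `[10, 20, 30, 255, 40, 50, 60, 128]`.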
• The picture file in the embodiment of the present application contains both RGB data and transparency data, so the first code stream data from which the RGB data can be generated and the second code stream data from which the transparency data can be generated can both be read by parsing the picture file, and steps 402 and 403 are performed separately.
• If the picture file contains only RGB data, the first code stream data from which the RGB data can be generated is read by parsing the picture file, and step 402 is performed to generate the RGB data, completing the decoding of the first code stream data.
• The decoding device decodes the first code stream data according to the first video decoding mode to generate the RGB data of the first image, and decodes the second code stream data according to the second video decoding mode to generate the transparency data of the first image; it then generates the RGBA data corresponding to the first image according to the RGB data and the transparency data of the first image.
• The RGBA data is obtained by separately decoding the first code stream data and the second code stream data in the picture file, so that the transparency data in the picture file is preserved while the video codec mode is used, ensuring the quality of the picture file.
  • FIG. 10 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present disclosure, where the method may be performed by the foregoing computing device.
• It is assumed that the computing device is a terminal device; the method in the embodiment of the present application may include steps 501 to 507.
  • the embodiment of the present application is described by taking a picture file in a dynamic format as an example. For details, refer to the following.
  • Step 501 Acquire first code stream data and second code stream data generated by the first image corresponding to the kth frame in the picture file in the dynamic format.
• The decoding device running in the terminal device obtains, by parsing the picture file in the dynamic format, the first code stream data and the second code stream data generated from the first image corresponding to the kth frame from the code stream data segment of the picture file.
  • the decoding device acquires the first code stream data and the second code stream data indicating the first image.
• The dynamic format picture file includes at least two frames of images, and the kth frame may be any one of the at least two frames, where k is a positive integer greater than zero.
• A picture file in a dynamic format includes RGB data and transparency data, and information indicating the code stream data segments corresponding to different frame images exists in the picture file, so that the decoding device can acquire the first code stream data generated from the RGB data of the first image and the second code stream data generated from the transparency data of the first image.
• The decoding apparatus may perform decoding according to the order of the code stream data corresponding to each frame in the picture file of the dynamic format, that is, the code stream data corresponding to the first frame of the picture file in the dynamic format may be acquired and decoded first.
  • the embodiment of the present application does not limit the order in which the decoding device acquires the code stream data of each frame image of the picture file in the dynamic format.
• The decoding apparatus may determine the code stream data representing the image of each frame by using the picture header information and the frame header information of the picture file; refer to the specific introduction of the picture header information and the frame header information in the next embodiment.
  • Step 502 Decode the first code stream data according to the first video decoding mode to generate RGB data of the first image.
  • the decoding apparatus decodes the first code stream data according to the first video decoding mode to generate RGB data of the first image.
• In some embodiments, the decoding apparatus decodes the first code stream data according to the first video decoding mode to generate the first YUV data of the first image, and converts the first YUV data into the RGB data of the first image.
  • Step 503 Decode the second code stream data according to the second video decoding mode to generate transparency data of the first image.
  • the decoding device decodes the second code stream data according to the second video decoding mode to generate transparency data of the first image.
• In some embodiments, the decoding apparatus decodes the second code stream data according to the second video decoding mode to generate the second YUV data of the first image, and converts the second YUV data into the transparency data of the first image.
• The decoding device sets the Y component of the second YUV data as the transparency data of the first image, and discards the U component and the V component of the second YUV data.
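The conversion of the second YUV data into transparency data described above amounts to keeping the luma plane and dropping the chroma planes. A minimal sketch, assuming planar lists as the YUV representation:

```python
def alpha_from_yuv(y_plane, u_plane, v_plane):
    """Recover transparency data from the second decoded YUV frame.

    Per the scheme above, the Y component carries the alpha values while
    the U and V components are discarded.
    """
    del u_plane, v_plane          # chroma planes carry no transparency information
    return list(y_plane)          # the luma plane is the alpha channel
```

For a decoded second YUV frame with Y plane `[0, 128, 255]`, the transparency data is simply `[0, 128, 255]` regardless of the chroma contents.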
• Step 502 and step 503 may be performed in any order.
  • Step 504 Generate RGBA data corresponding to the first image according to the RGB data of the first image and the transparency data.
  • the decoding device generates RGBA data corresponding to the first image according to the RGB data of the first image and the transparency data.
  • RGBA data is a color space representing Red, Green, Blue, and Alpha.
  • RGB data and transparency data can be synthesized into RGBA data.
• The code stream data encoded according to the video encoding mode can be used to generate the corresponding RGBA data through the corresponding video decoding mode, so that the transparency data in the picture file is preserved while the video codec mode is used, ensuring the quality and display effect of the picture file.
• The RGB data and the transparency data of the first image obtained by the decoding device through decoding are in the following form:
  • the decoding device combines the corresponding RGB data and transparency data to obtain RGBA data of the first image, and the form is as follows:
  • Step 505 Determine whether the kth frame is the last frame of the picture file in the dynamic format.
• The decoding apparatus determines whether the kth frame is the last frame of the picture file in the dynamic format. In some embodiments of the present application, whether decoding of the picture file is complete may be determined by checking the number of frames recorded in the picture header information. If the kth frame is the last frame of the picture file in the dynamic format, decoding of the picture file in the dynamic format is complete, and step 507 is performed; if the kth frame is not the last frame of the picture file in the dynamic format, step 506 is performed.
• Step 506: If the kth frame is not the last frame of the picture file in the dynamic format, update k, and trigger the operation of acquiring the first code stream data and the second code stream data of the image corresponding to the kth frame in the picture file in the dynamic format.
• When the decoding apparatus determines that the kth frame is not the last frame of the picture file in the dynamic format, the code stream data of the image corresponding to the next frame is to be decoded, that is, k is updated to (k+1). After updating k, the operation of acquiring the first code stream data and the second code stream data of the image corresponding to the kth frame in the picture file in the dynamic format is triggered.
• The image acquired using the updated k is not the same image as that acquired before the update. For ease of distinction, the image corresponding to the kth frame before the update is referred to as the first image, and the image corresponding to the kth frame after the update is referred to as the second image; the code stream data representing the second image is the third code stream data and the fourth code stream data.
• The first video decoding mode, the second video decoding mode, the third video decoding mode, or the fourth video decoding mode mentioned above is determined according to the video encoding mode used to generate the corresponding code stream data. Taking the first code stream data as an example: if the first code stream data is encoded as an I frame, the first video decoding mode generates the RGB data from the current code stream data alone; if the first code stream data is encoded as a P frame, the first video decoding mode generates the RGB data of the current frame with reference to previously decoded data.
• The picture file in the dynamic format includes a plurality of code stream data segments. In some embodiments of the present application, one frame image corresponds to one code stream data segment; in other embodiments, one piece of code stream data corresponds to one code stream data segment. In either case, the code stream data segments from which the first code stream data and the second code stream data are read differ from the code stream data segments from which the third code stream data and the fourth code stream data are read.
  • Step 507 If the kth frame is the last frame of the picture file in the dynamic format, decoding the picture file in the dynamic format is completed.
  • the decoding apparatus determines that the kth frame is the last frame of the picture file of the dynamic format, it indicates that the decoding of the picture file of the dynamic format is completed.
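Steps 501 to 507 above can be sketched as a loop over the frames of a dynamic format picture file. The list of stream pairs and the two decoder callbacks standing in for the first and second video decoding modes are assumptions made to keep the sketch self-contained:

```python
def decode_dynamic_picture(frames, decode_rgb, decode_alpha):
    """Sketch of steps 501-507 for a dynamic format picture file.

    `frames` is a list of (first_stream, second_stream) tuples, one pair
    per frame; `decode_rgb` and `decode_alpha` stand in for the first and
    second video decoding modes.
    """
    images = []
    k = 0
    while True:
        first_stream, second_stream = frames[k]   # step 501: fetch kth frame streams
        rgb = decode_rgb(first_stream)            # step 502: first video decoding mode
        alpha = decode_alpha(second_stream)       # step 503: second video decoding mode
        images.append((rgb, alpha))               # step 504: RGBA data of this frame
        if k == len(frames) - 1:                  # step 505: last frame reached?
            return images                         # step 507: decoding complete
        k += 1                                    # step 506: update k, continue
```

With placeholder decoders, each frame pair is decoded in order and the loop terminates after the last frame.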
• The decoding apparatus may parse the picture file to obtain the picture header information and the frame header information of the picture file in the dynamic format, so that the picture header information can be used to determine whether the picture file includes transparency data, and thereby whether only the first code stream data generated from the RGB data is acquired during decoding, or both the first code stream data generated from the RGB data and the second code stream data generated from the transparency data are acquired.
• The image corresponding to each frame in the dynamic format picture file of the embodiment of the present application contains RGBA data, that is, RGB data and transparency data.
• In other embodiments, the image corresponding to each frame in the dynamic format picture file contains only RGB data; in that case, the decoding means may perform step 502 on the first code stream data representing each frame image to generate the RGB data. In this way, code stream data containing only RGB data can still be decoded by the video decoding mode.
• In the embodiment of the present application, the decoding apparatus decodes the first code stream data of each frame image according to the first video decoding mode to generate the RGB data of the first image, decodes the second code stream data of each frame image according to the second video decoding mode to generate the transparency data of the first image, and generates the RGBA data corresponding to the first image according to the RGB data and the transparency data of the first image.
• The RGBA data is obtained by separately decoding the first code stream data and the second code stream data in the picture file, so that the transparency data in the picture file is preserved while the video codec mode is used, ensuring the quality of the picture file.
  • FIG. 11 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present disclosure, where the method may be performed by the foregoing computing device.
• It is assumed that the computing device is a terminal device; the method in the embodiment of the present application may include steps 601 to 606.
  • Step 601 Parse the picture file to obtain picture header information and frame header information of the picture file.
  • the decoding device running in the terminal device parses the picture file to obtain picture header information and frame header information of the picture file.
• The picture header information includes image feature information indicating whether the picture file has transparency data; by determining whether transparency data is included, it can be determined how the code stream data is to be acquired and whether the acquired code stream data includes the second code stream data generated from the transparency data.
• The frame header information is used to indicate a code stream data segment of the picture file; the code stream data segment from which the code stream data can be acquired is determined by using the frame header information, thereby enabling decoding of the code stream data.
  • the frame header information includes a frame header information start code, and the code stream data segment can be determined by identifying the frame header information start code.
  • the decoding device parsing the picture file to obtain the picture header information of the picture file may be: reading the picture header information of the picture file from the picture header information data segment of the picture file.
  • the decoding device parsing the picture file to obtain the frame header information of the picture file may be: reading the frame header information of the picture file from the frame header information data segment of the picture file.
• The picture header information and the frame header information in the embodiment of the present application may refer to the examples in FIG. 5a, FIG. 5b, FIG. 5c, FIG. 6a, FIG. 6b, FIG. 7a, FIG. 7b, FIG. 8a, FIG. 8b, and FIG. 8c; details are not repeated here.
  • Step 602 Read code stream data in a code stream data segment indicated by the frame header information in the picture file.
  • the decoding apparatus reads the code stream data in the code stream data segment indicated by the frame header information in the picture file.
  • the code stream data includes first code stream data and second code stream data.
  • a frame image of the picture file corresponds to one frame header information, that is, the frame header information may be used to indicate a code stream data segment including the first code stream data and the second code stream data.
  • the static format image file includes one frame image, that is, the first image. Therefore, the static format image file includes a frame header information.
  • the picture file of the dynamic format generally contains at least two frames of images, and one frame header information is provided for each of the frames. If it is determined that the picture file includes transparency data, the decoding apparatus reads the first code stream data and the second code stream data according to the code stream data segment indicated by the frame header information.
  • one code stream data in one frame of the picture file corresponds to one frame header information, that is, a code stream data segment indicated in one frame header information includes one code stream data.
• In a static format, the picture file includes one frame image, that is, the first image; when the first image includes transparency data it corresponds to two pieces of code stream data, namely the first code stream data and the second code stream data, each of which corresponds to its own frame header information.
• In a dynamic format, the picture file contains at least two frames of images, and each frame image containing transparency data corresponds to two pieces of code stream data, namely the first code stream data and the second code stream data, with one piece of frame header information provided for each of them. Therefore, if it is determined that the picture file includes transparency data, the decoding apparatus acquires the first code stream data and the second code stream data according to the two code stream data segments respectively indicated by the two pieces of frame header information.
• In some embodiments of the present application, the encoding apparatus may arrange, in a preset order, the frame header information data segment corresponding to the first code stream data, the first code stream data segment, the frame header information data segment corresponding to the second code stream data, and the second code stream data segment, and the decoding device knows this arrangement order. For example, for one frame image, the segments may be arranged as: the frame header information data segment corresponding to the first code stream data, the first code stream data segment, the frame header information data segment corresponding to the second code stream data, and the second code stream data segment. In this way, during decoding, the decoding device can determine the two pieces of frame header information representing the frame image, and from which of the code stream data segments they indicate the first code stream data and the second code stream data can each be acquired.
  • the first code stream data herein refers to code stream data generated by RGB data
  • the second code stream data refers to code stream data generated by transparency data.
  • Step 603 Decode the first code stream data according to the first video decoding mode to generate RGB data of the first image.
  • Step 604 Decode the second code stream data according to the second video decoding mode to generate transparency data of the first image.
  • Step 605 Generate RGBA data corresponding to the first image according to the RGB data of the first image and the transparency data.
• In the embodiment of the present application, the decoding apparatus parses the picture file to obtain the picture header information and the frame header information of the picture file, and reads the code stream data in the code stream data segments indicated by the frame header information; it decodes the first code stream data of each frame image according to the first video decoding mode to generate the RGB data of the first image, decodes the second code stream data of each frame image according to the second video decoding mode to generate the transparency data of the first image, and generates the RGBA data corresponding to the first image according to the RGB data and the transparency data of the first image.
• The RGBA data is obtained by separately decoding the first code stream data and the second code stream data in the picture file, so that the transparency data in the picture file is preserved while the video codec mode is used, ensuring the quality of the picture file.
  • FIG. 12 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present disclosure, which may be performed by the foregoing computing device. As shown in FIG. 12, it is assumed that the computing device is a terminal device, and the method in the embodiment of the present application may include steps 701 to 705.
  • Step 701 Generate picture header information and frame header information corresponding to the picture file.
  • the picture file processing apparatus running in the terminal device generates picture header information and frame header information corresponding to the picture file.
• The picture file may be a picture file in a static format, that is, include only the first image; or a picture file in a dynamic format, that is, include the first image and other images. In either case, the picture file processing apparatus needs to generate the picture header information corresponding to the picture file.
• The picture header information includes image feature information indicating whether the picture file has transparency data, so that the decoding device can determine, by using the image feature information, whether the picture file includes transparency data, how to obtain the code stream data, and whether the obtained code stream data contains the second code stream data generated from the transparency data.
• The frame header information is used to indicate a code stream data segment of the picture file, so that the decoding device can determine, by using the frame header information, the code stream data segment from which the code stream data can be acquired, thereby enabling decoding of the code stream data.
  • the frame header information includes a frame header information start code, and the code stream data segment can be determined by identifying the frame header information start code.
  • Step 702 Write the picture header information into a picture header information data segment of the picture file.
  • the picture file processing apparatus writes the picture header information into the picture file picture header information data segment.
• Step 703 Write the frame header information into a frame header information data segment of the picture file.
• Specifically, the picture file processing apparatus writes the frame header information into a frame header information data segment of the picture file.
  • Step 704 If it is determined that the image file includes transparency data according to the image feature information included in the picture header information, the RGB data included in the RGBA data corresponding to the first image is encoded according to the first video coding mode to generate the first The code stream data, and the transparency data included in the RGBA data corresponding to the first image are encoded according to the second video coding mode to generate second code stream data.
• Specifically, the picture file processing apparatus encodes the RGB data included in the RGBA data corresponding to the first image according to the first video coding mode to generate the first code stream data, and encodes the transparency data included in the RGBA data corresponding to the first image according to the second video coding mode to generate the second code stream data.
• Before encoding, the picture file processing apparatus acquires the RGBA data corresponding to the first image in the picture file, and separates the RGBA data to obtain the RGB data and the transparency data of the first image, where the RGB data is the color data included in the RGBA data, and the transparency data is the transparency data included in the RGBA data.
• For the specific coding process, reference may be made to the embodiments shown in FIG. 1 to FIG. 4d; details are not described herein again.
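The separation step described above can be sketched as follows. The flat RGBA byte layout and the function name are assumptions for illustration; a real implementation would operate on the encoder's actual pixel buffer.

```python
def split_rgba(rgba: bytes):
    """Separate flat RGBA data (4 bytes per pixel) into the RGB plane
    fed to the first video encoder and the alpha plane fed to the
    second video encoder."""
    assert len(rgba) % 4 == 0, "RGBA data is 4 bytes per pixel"
    rgb, alpha = bytearray(), bytearray()
    for i in range(0, len(rgba), 4):
        rgb += rgba[i:i + 3]       # color data (R, G, B)
        alpha.append(rgba[i + 3])  # transparency data (A)
    return bytes(rgb), bytes(alpha)
```

The two returned planes are then encoded independently, which is what allows an off-the-shelf video codec (which has no alpha channel) to carry transparency.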
  • Step 705 Write the first code stream data and the second code stream data into a code stream data segment indicated by the frame header information corresponding to the first image.
  • the picture file processing apparatus writes the first code stream data and the second code stream data into a code stream data segment indicated by the frame header information corresponding to the first image.
• For the picture header information and the frame header information in the embodiment of the present application, reference may be made to the examples in FIG. 5a, FIG. 5b, FIG. 5c, FIG. 6a, FIG. 6b, FIG. 7a, FIG. 7b, FIG. 8a, and FIG. 8b; details are not repeated here.
  • the RGBA data input before encoding in the embodiment of the present application may be obtained by decoding image files of various formats, where the format of the image file may be JPEG, BMP, PNG, APNG, GIF, etc.
  • the format of the picture file before encoding is not limited.
• In the embodiment of the present application, the picture file processing apparatus generates the picture header information and the frame header information corresponding to the picture file. The image feature information included in the picture header information, which indicates whether the picture file has transparency data, enables the decoding apparatus to determine how to obtain the code stream data and whether the obtained code stream data includes the second code stream data generated from the transparency data; the code stream data segment of the picture file indicated by the frame header information enables the decoding apparatus to acquire the code stream data in the code stream data segment and, in turn, decode it.
  • FIG. 13 is a schematic flowchart diagram of another method for processing a picture file according to an embodiment of the present disclosure, where the method may be performed by the foregoing computing device.
• Assume that the computing device is a terminal device; the method in the embodiment of the present application may include steps 801 to 803.
  • Step 801 Parse the picture file to obtain picture header information and frame header information of the picture file.
  • the picture file processing apparatus running in the terminal device parses the picture file to obtain picture header information and frame header information of the picture file.
• The picture header information includes image feature information indicating whether the picture file has transparency data; by determining whether the picture file includes transparency data, it can be determined how to obtain the code stream data and whether the obtained code stream data includes the second code stream data generated from the transparency data.
• The frame header information is used to indicate a code stream data segment of the picture file; the code stream data segment from which the code stream data can be obtained can be determined by using the frame header information, thereby enabling decoding of the code stream data.
  • the frame header information includes a frame header information start code, and the code stream data segment can be determined by identifying the frame header information start code.
• The picture file processing apparatus parses the picture file to obtain the picture header information of the picture file, which may be: reading the picture header information of the picture file from the picture header information data segment of the picture file.
• The picture file processing apparatus parses the picture file to obtain the frame header information of the picture file, which may be: reading the frame header information of the picture file from the frame header information data segment of the picture file.
• For the picture header information and the frame header information in the embodiment of the present application, reference may be made to the examples in FIG. 5a, FIG. 5b, FIG. 5c, FIG. 6a, FIG. 6b, FIG. 7a, FIG. 7b, FIG. 8a, and FIG. 8b; details are not repeated here.
  • Step 802 If it is determined by the image feature information that the picture file includes transparency data, read code stream data in a code stream data segment indicated by the frame header information in the picture file, where the code stream data includes The first code stream data and the second code stream data.
  • the picture file processing apparatus reads the code stream data in the code stream data segment indicated by the frame header information in the picture file.
  • the code stream data includes first code stream data and second code stream data.
  • one frame image of the picture file corresponds to one frame header information, that is, the frame header information may be used to indicate a code stream data segment including the first code stream data and the second code stream data.
  • the static format image file includes one frame image, that is, the first image. Therefore, the static format image file includes a frame header information.
  • the picture file of the dynamic format generally includes at least two frames of images, and one frame header information is added for each of the frames. If it is determined that the picture file includes transparency data, the picture file processing apparatus reads the first code stream data and the second code stream data according to the code stream data segment indicated by the frame header information.
  • one code stream data in one frame image of the picture file corresponds to one frame header information, that is, a code stream data segment indicated in one frame header information includes one code stream data.
• The static format picture file includes one frame image, that is, the first image; the first image containing transparency data corresponds to two code stream data, namely the first code stream data and the second code stream data, and each of the two code stream data corresponds to its own frame header information.
• The dynamic format picture file contains at least two frames of images, and each frame image containing transparency data corresponds to two code stream data, namely the first code stream data and the second code stream data; one piece of frame header information is added for each of the first code stream data and the second code stream data of each frame image. Therefore, if it is determined that the picture file includes transparency data, the picture file processing apparatus acquires the first code stream data and the second code stream data respectively according to the two code stream data segments respectively indicated by the two frame header information.
• In a specific implementation, the encoding apparatus may arrange, in a preset order, the frame header information data segment corresponding to the first code stream data, the first code stream data segment, the frame header information data segment corresponding to the second code stream data, and the second code stream data segment, and the picture file processing apparatus knows this arrangement order. For example, for one frame image, the segments may be arranged as: the frame header information data segment corresponding to the first code stream data, then the first code stream data segment, then the frame header information data segment corresponding to the second code stream data, then the second code stream data segment. In this way, during decoding, the picture file processing apparatus can determine, of the two code stream data segments indicated by the two frame header information of the frame image, from which one the first code stream data can be acquired and from which one the second code stream data can be acquired.
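The interleaved arrangement above can be parsed with a simple sequential scan. The start-code value and the 4-byte length field below are purely hypothetical stand-ins for whatever the format actually defines; only the ordering (header, stream, header, stream) is taken from the text.

```python
import struct

# Hypothetical frame-header record, for illustration only: a 2-byte
# start code followed by a 4-byte big-endian length of the code stream
# segment that follows it.
START_CODE = b'\xAB\xCD'

def read_frame_streams(buf: bytes):
    """Scan 'header | stream | header | stream | ...' and return the
    code stream segments in order. For a frame with transparency, the
    first returned segment is the first code stream data (RGB) and the
    second is the second code stream data (transparency)."""
    segments, pos = [], 0
    while pos < len(buf):
        if buf[pos:pos + 2] != START_CODE:
            raise ValueError("frame header start code expected at offset %d" % pos)
        (length,) = struct.unpack_from('>I', buf, pos + 2)
        pos += 6                          # skip start code + length field
        segments.append(buf[pos:pos + length])
        pos += length
    return segments
```

Because the decoder knows the preset order, no per-segment type flag is needed: the first segment of each pair is always the RGB stream and the second is always the transparency stream.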
  • the first code stream data herein refers to code stream data generated by RGB data
  • the second code stream data refers to code stream data generated by transparency data.
  • Step 803 Decode the first code stream data and the second code stream data respectively.
• After acquiring the first code stream data and the second code stream data from the code stream data segment, the picture file processing apparatus decodes the first code stream data and the second code stream data respectively.
• The picture file processing apparatus may implement decoding of the first code stream data and the second code stream data by referring to the execution process of the decoding apparatus in the embodiment shown in FIG. 9 and the related embodiments.
• In the embodiment of the present application, the picture file processing apparatus parses the picture file to obtain the picture header information and the frame header information. By using the image feature information included in the picture header information, which indicates whether the picture file has transparency data, it can determine how to obtain the code stream data and whether the obtained code stream data includes the second code stream data generated from the transparency data; by using the code stream data segment of the picture file indicated by the frame header information, it can obtain the code stream data in the code stream data segment, thereby implementing decoding of the code stream data.
  • FIG. 14 is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application.
  • the encoding apparatus 1 of the embodiment of the present application may include: a data acquiring module 11, a first encoding module 12, a second encoding module 13, and a data writing module 14.
  • the data obtaining module 11 is configured to acquire RGBA data corresponding to the first image in the picture file, and separate the RGBA data to obtain RGB data and transparency data of the first image, where the RGB data is the RGBA The color data included in the data, the transparency data being transparency data included in the RGBA data;
  • the first encoding module 12 is configured to encode the RGB data of the first image according to the first video encoding mode to generate the first code stream data;
  • the second encoding module 13 is configured to encode the transparency data of the first image according to the second video encoding mode to generate second code stream data.
  • the data writing module 14 is configured to write the first code stream data and the second code stream data into a code stream data segment of the picture file, where the first image is included in the picture file image.
  • the first encoding module 12 includes a first data conversion unit 121 and a first code stream generating unit 122, where:
  • a first data conversion unit 121 configured to convert RGB data of the first image into first YUV data
  • the first code stream generating unit 122 is configured to encode the first YUV data according to the first video coding mode to generate first code stream data.
  • the second encoding module 13 includes a second data conversion unit 131 and a second code stream generating unit 132, where:
  • a second data conversion unit 131 configured to convert transparency data of the first image into second YUV data
  • the second code stream generating unit 132 is configured to encode the second YUV data according to the second video coding mode to generate second code stream data.
  • the second data conversion unit 131 is configured to set the transparency data of the first image as the Y component in the second YUV data, and not set the second YUV data. U and V components.
  • the second data conversion unit 131 is configured to set the transparency data of the first image as the Y component in the second YUV data, and set the U component and the V component in the second YUV data. For preset data.
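As a sketch of the second conversion variant (U and V set to preset data), the YUV planes can be built directly from the alpha plane. The preset value 128 (mid-range chroma) and the 4:2:0 plane sizing are illustrative assumptions; the embodiment only requires that the transparency data occupy the Y component.

```python
def alpha_to_yuv420(alpha: bytes, width: int, height: int, preset: int = 128):
    """Build the second YUV data: the transparency (alpha) plane becomes
    the Y plane, while U and V are filled with a preset constant, since
    they carry no transparency information. Assumes 4:2:0 subsampling
    with even width and height."""
    assert len(alpha) == width * height
    y = bytes(alpha)
    chroma_size = (width // 2) * (height // 2)  # one byte per 2x2 block
    u = bytes([preset]) * chroma_size
    v = bytes([preset]) * chroma_size
    return y, u, v
```

Filling the chroma planes with a constant keeps them trivially compressible, so the second code stream data spends almost all of its bits on the alpha information in the Y plane.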
  • the data obtaining module 11 is configured to determine, if the picture file is a picture file in a dynamic format, and the first image is an image corresponding to a kth frame in the picture file, Whether the kth frame is the last frame in the picture file, where k is a positive integer greater than 0; if the kth frame is not the last frame in the picture file, acquiring the picture file The RGBA data corresponding to the second image corresponding to the (k+1)th frame, and separating the RGBA data corresponding to the second image to obtain RGB data and transparency data of the second image;
• The first encoding module 12 is further configured to encode the RGB data of the second image according to a third video encoding mode to generate third code stream data;
  • the second encoding module 13 is further configured to encode the transparency data of the second image according to the fourth video encoding mode to generate fourth code stream data;
  • the data writing module 14 is further configured to write the third code stream data and the fourth code stream data into a code stream data segment of the picture file.
  • the encoding apparatus 1 further includes:
  • the information generating module 15 is configured to generate image header information and frame header information corresponding to the image file, where the image header information includes image feature information indicating whether the image file has transparency data, and the frame header information is used by the image header information. And indicating a code stream data segment of the picture file.
• The data writing module 14 is further configured to write the picture header information generated by the information generating module 15 into a picture header information data segment of the picture file.
• The data writing module 14 is further configured to write the frame header information generated by the information generating module 15 into a frame header information data segment of the picture file.
• The modules, the units, and the beneficial effects performed by the encoding apparatus 1 described in the embodiments of the present application may be implemented according to the method in the foregoing method embodiments shown in FIG. 1c to FIG. 8c; details are not described herein again.
  • FIG. 15 is a schematic structural diagram of another encoding apparatus according to an embodiment of the present application.
  • the encoding apparatus 1000 may include at least one processor 1001, such as a CPU, at least one network interface 1004, a memory 1005, and at least one communication bus 1002.
  • the network interface 1004 can include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory.
  • the memory 1005 may also be at least one storage device located remotely from the aforementioned processor 1001.
  • the communication bus 1002 is used to implement connection communication between these components.
  • the encoding device 1000 includes a user interface 1003, wherein the user interface 1003 may include a display 10031 and a keyboard 10032.
• The memory 1005, as a computer readable storage medium, may include an operating system 10051, a network communication module 10052, a user interface module 10053, and machine readable instructions 10054, the machine readable instructions 10054 including an encoding application 10055.
  • the processor 1001 can be used to call the encoding application 10055 stored in the memory 1005, and specifically performs the following operations:
• acquiring RGBA data corresponding to the first image in the picture file, and separating the RGBA data to obtain RGB data and transparency data of the first image, where the RGB data is color data included in the RGBA data, and the transparency data is transparency data included in the RGBA data;
• encoding the RGB data of the first image according to the first video coding mode to generate first code stream data, and encoding the transparency data of the first image according to the second video coding mode to generate second code stream data; and
• writing the first code stream data and the second code stream data into a code stream data segment of the picture file.
• When encoding the RGB data of the first image according to the first video coding mode to generate the first code stream data, the processor 1001 specifically performs:
• When encoding the transparency data of the first image according to the second video coding mode to generate the second code stream data, the processor 1001 specifically performs:
• When converting the transparency data of the first image into the second YUV data, the processor 1001 specifically performs:
  • the transparency data of the first image is set as the Y component in the second YUV data, and the U component and the V component in the second YUV data are set as preset data.
  • the processor 1001 also performs the following steps:
• if the picture file is a picture file in a dynamic format and the first image is an image corresponding to the kth frame in the picture file, determining whether the kth frame is the last frame in the picture file, where k is a positive integer greater than 0; if the kth frame is not the last frame in the picture file, obtaining the RGBA data corresponding to the second image corresponding to the (k+1)th frame in the picture file, and separating the RGBA data corresponding to the second image to obtain RGB data and transparency data of the second image;
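The per-frame logic above can be sketched as a loop over the frames of a dynamic-format picture file. `encode_rgb` and `encode_alpha` are hypothetical stand-ins for the first and second video encoders, which are outside the scope of this sketch; only the separate-then-encode structure is taken from the text.

```python
def encode_dynamic_picture(frames, encode_rgb, encode_alpha):
    """For each frame (the kth frame, k >= 1) of a dynamic-format
    picture file: separate the frame's flat RGBA data into RGB and
    alpha planes, encode each plane with its own encoder, and collect
    the resulting code stream pairs. The loop ends at the last frame."""
    code_streams = []
    for k, rgba in enumerate(frames, start=1):
        rgb, alpha = bytearray(), bytearray()
        for i in range(0, len(rgba), 4):
            rgb += rgba[i:i + 3]       # color data for the RGB encoder
            alpha.append(rgba[i + 3])  # transparency data for the alpha encoder
        code_streams.append((encode_rgb(bytes(rgb)), encode_alpha(bytes(alpha))))
    return code_streams
```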
  • the third code stream data and the fourth code stream data are written into a code stream data segment of the picture file.
  • the processor 1001 also performs the following steps:
• generating picture header information and frame header information corresponding to the picture file, where the picture header information includes image feature information indicating whether the picture file has transparency data, and the frame header information is used to indicate the code stream data segment of the picture file.
  • the processor 1001 also performs the following steps:
  • the picture header information is written into a picture header information data segment of the picture file.
  • the processor 1001 also performs the following steps:
• the frame header information is written into a frame header information data segment of the picture file.
  • FIG. 16 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application.
  • the decoding apparatus 2 of the embodiment of the present application may include: a first data acquiring module 26, a first decoding module 21, a second decoding module 22, and a data generating module 23.
• The first code stream data and the second code stream data in the embodiment of the present application are data generated from the first image and read from a code stream data segment of a picture file.
  • the first data obtaining module 26 is configured to obtain, from the code stream data segment of the picture file, the first code stream data and the second code stream data generated by the first image in the picture file;
  • the first decoding module 21 is configured to decode the first code stream data according to the first video decoding mode to generate RGB data of the first image;
  • a second decoding module 22 configured to decode the second code stream data according to the second video decoding mode, to generate transparency data of the first image
  • the data generating module 23 is configured to generate RGBA data corresponding to the first image according to the RGB data of the first image and the transparency data.
  • the first decoding module 21 includes a first data generating unit 211 and a first data converting unit 212, where:
  • the first data generating unit 211 is configured to decode the first code stream data according to the first video decoding mode to generate first YUV data of the first image;
  • the first data conversion unit 212 is configured to convert the first YUV data into RGB data of the first image.
  • the second decoding module 22 includes a second data generating unit 221 and a second data converting unit 222, where:
  • a second data generating unit 221, configured to decode the second code stream data according to a second video decoding mode, to generate second YUV data of the first image
  • the second data conversion unit 222 is configured to convert the second YUV data into transparency data of the first image.
• The second data conversion unit 222 is specifically configured to set the Y component in the second YUV data as the transparency data of the first image, and discard the U component and the V component in the second YUV data.
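The decode-side conversion performed by the second data conversion unit is straightforward; a minimal sketch, assuming flat byte planes:

```python
def yuv_to_alpha(y: bytes, u: bytes, v: bytes) -> bytes:
    """Recover the transparency data of the first image from the decoded
    second YUV data: the Y plane is taken as the alpha plane, and the
    U and V components are discarded because they carry no
    transparency information."""
    del u, v  # chroma planes are intentionally ignored
    return bytes(y)
```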
  • the decoding apparatus 2 further includes:
  • the second data obtaining module 24 is configured to determine the kth frame if the picture file is a picture file in a dynamic format and the first image is an image corresponding to a kth frame in the picture file in the dynamic format. Whether it is the last frame in the picture file, where k is a positive integer greater than 0; if the kth frame is not the last frame in the picture file, from the code stream data segment of the picture file Obtaining third code stream data and fourth code stream data generated by the second image corresponding to the (k+1)th frame in the picture file;
  • the first decoding module 21 is further configured to: decode the third code stream data according to a third video decoding mode, to generate RGB data of the second image;
  • the second decoding module 22 is further configured to: decode the fourth code stream data according to a fourth video decoding mode, to generate transparency data of the second image;
  • the data generating module 23 is further configured to generate RGBA data corresponding to the second image according to the RGB data of the second image and the transparency data.
  • the decoding device 2 further includes a file parsing module 25:
  • the file parsing module 25 is configured to parse a picture file to obtain picture header information and frame header information of the picture file, where the picture header information includes image feature information indicating whether the picture file has transparency data, the frame The header information is used to indicate a stream data segment of the picture file.
  • the file parsing module 25 is specifically configured to read the picture header information of the picture file from the picture header information data segment of the picture file.
  • the file parsing module 25 is specifically configured to read the header information of the picture file from the frame header information data segment of the picture file.
  • the first data acquiring module 26 is configured to: if it is determined by the image feature information that the image file includes transparency data, read a code stream in a code stream data segment indicated by the frame header information in the image file. Data, the code stream data includes first code stream data and second code stream data.
• The modules, the units, and the beneficial effects performed by the decoding apparatus 2 described in the embodiments of the present application may be implemented according to the method in the foregoing method embodiments shown in FIG. 9 and the subsequent figures; details are not described herein again.
  • FIG. 17 is a schematic structural diagram of another decoding apparatus according to an embodiment of the present application.
  • the decoding device 2000 may include at least one processor 2001, such as a CPU, at least one network interface 2004, a memory 2005, and at least one communication bus 2002.
  • Network interface 2004 may include a standard wired interface, a wireless interface (such as a WI-FI interface).
  • the memory 2005 may be a high speed RAM memory or a non-volatile memory such as at least one disk memory.
  • the memory 2005 can also be at least one storage device located remotely from the aforementioned processor 2001.
  • the communication bus 2002 is used to implement connection communication between these components.
  • the decoding device 2000 includes a user interface 2003, wherein the user interface 2003 may include a display 20031, a keyboard 20032.
• The memory 2005, as a computer readable storage medium, may include an operating system 20051, a network communication module 20052, a user interface module 20053, and machine readable instructions 20054, the machine readable instructions 20054 including a decoding application 20055.
  • the processor 2001 can be used to call the decoding application 20055 stored in the memory 2005, and specifically performs the following operations:
• decoding the first code stream data according to the first video decoding mode to generate RGB data of the first image; decoding the second code stream data according to the second video decoding mode to generate transparency data of the first image; and generating RGBA data corresponding to the first image according to the RGB data of the first image and the transparency data, where the first code stream data and the second code stream data are data generated from the first image and read from a code stream data segment of a picture file.
• When decoding the first code stream data according to the first video decoding mode to generate the RGB data of the first image, the processor 2001 specifically performs:
• When decoding the second code stream data according to the second video decoding mode to generate the transparency data of the first image, the processor 2001 specifically performs:
• When converting the second YUV data into the transparency data of the first image, the processor 2001 specifically performs:
  • the Y component in the second YUV data is set to the transparency data of the first image, and the U component and the V component in the second YUV data are discarded.
  • the processor 2001 also performs the following steps:
• if the picture file is a picture file in a dynamic format and the first image is an image corresponding to the kth frame in the picture file of the dynamic format, determining whether the kth frame is the last frame in the picture file, where k is a positive integer greater than 0; if the kth frame is not the last frame in the picture file, obtaining, from the code stream data segment of the picture file, third code stream data and fourth code stream data generated from the second image corresponding to the (k+1)th frame in the picture file;
  • the processor 2001 performs the following steps before performing decoding on the first code stream data according to the first video decoding mode to generate RGB data of the first image:
• parsing the picture file to obtain picture header information and frame header information of the picture file, where the picture header information includes image feature information indicating whether the picture file has transparency data, and the frame header information is used to indicate the code stream data segment of the picture file.
• When parsing the picture file to obtain the picture header information of the picture file, the processor 2001 specifically performs:
  • the picture header information of the picture file is read from the picture header information data segment of the picture file.
• When parsing the picture file to obtain the frame header information of the picture file, the processor 2001 specifically performs:
  • the header information of the picture file is read from a frame header information data segment of the picture file.
  • the processor 2001 further performs the following steps: if it is determined by the image feature information that the picture file includes transparency data, reading code stream data indicated by the frame header information in the picture file The code stream data in the segment, the code stream data including the first code stream data and the second code stream data.
  • FIG. 18 is a schematic structural diagram of a picture file processing apparatus according to an embodiment of the present application.
  • the picture file processing apparatus 3 of the embodiment of the present application may include an information generating module 31.
  • the picture file processing apparatus 3 may further include at least one of the first information writing module 32, the second information writing module 33, the data encoding module 34, and the data writing module 35.
  • the information generating module 31 is configured to generate picture header information and frame header information corresponding to the picture file, where the picture header information includes image feature information indicating whether the picture file has transparency data, and the frame header information is used to indicate the The stream data segment of the image file.
  • the picture file processing apparatus 3 further includes:
  • the first information writing module 32 is configured to write the picture header information into the picture header information data segment of the picture file.
  • the picture file processing apparatus 3 further includes a second information writing module 33:
  • the second information writing module 33 is configured to write the frame header information into a header information data segment of the picture file.
  • the picture file processing apparatus 3 further includes a data encoding module 34 and a data writing module 35:
  • the data encoding module 34 is configured to, if it is determined from the image feature information that the picture file includes transparency data, encode the RGB data included in the RGBA data corresponding to the first image included in the picture file to generate first code stream data, and encode the included transparency data to generate second code stream data;
  • the data writing module 35 is configured to write the first code stream data and the second code stream data into the code stream data segment indicated by the frame header information corresponding to the first image.
  • FIG. 19 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present application.
  • the picture file processing apparatus 3000 may include at least one processor 3001, such as a CPU, at least one network interface 3004, a memory 3005, and at least one communication bus 3002.
  • the network interface 3004 may include a standard wired interface or a wireless interface (such as a WI-FI interface).
  • the memory 3005 may be a high-speed RAM memory, or a non-volatile memory such as at least one disk memory.
  • the memory 3005 may also be at least one storage device located remotely from the aforementioned processor 3001.
  • the communication bus 3002 is used to implement connection communication between these components.
  • the picture file processing apparatus 3000 includes a user interface 3003, wherein the user interface 3003 may include a display 30031 and a keyboard 30032.
  • an operating system 30051, a network communication module 30052, a user interface module 30053, and machine readable instructions 30054 may be included in the memory 3005, which serves as a computer readable storage medium; the machine readable instructions 30054 include a picture file processing application 30055.
  • the processor 3001 may be configured to call the picture file processing application 30055 stored in the memory 3005, and specifically performs the following operations: generating picture header information and frame header information corresponding to the picture file, where
  • the picture header information includes image feature information indicating whether the picture file has transparency data, and
  • the frame header information is used to indicate a code stream data segment of the picture file.
  • the processor 3001 also performs the following steps:
  • the picture header information is written into a picture header information data segment of the picture file.
  • the processor 3001 also performs the following steps:
  • the frame header information is written into a frame header information data segment of the picture file.
  • the processor 3001 also performs the following steps:
  • if it is determined from the image feature information that the picture file includes transparency data, encoding the RGB data included in the RGBA data corresponding to the first image included in the picture file to generate the first code stream data, and encoding the included transparency data to generate the second code stream data;
  • FIG. 20 is a schematic structural diagram of a picture file processing apparatus according to an embodiment of the present application.
  • the picture file processing apparatus 4 of the embodiment of the present application may include a file parsing module 41.
  • the picture file processing apparatus 4 may further include at least one of a data reading module 42 and a data decoding module 43.
  • the file parsing module 41 is configured to parse the picture file to obtain picture header information and frame header information of the picture file, where the picture header information includes image feature information indicating whether the picture file has transparency data, and the frame header information is used to indicate a code stream data segment of the picture file.
  • the file parsing module 41 is specifically configured to read the picture header information of the picture file from the picture header information data segment of the picture file.
  • the file parsing module 41 is specifically configured to read frame header information of the picture file from a frame header information data segment of the picture file.
  • the picture file processing apparatus 4 further includes a data reading module 42 and a data decoding module 43, wherein:
  • the data reading module 42 is configured to: if it is determined from the image feature information that the picture file includes the transparency data, read the code stream data in the code stream data segment indicated by the frame header information in the picture file.
  • the code stream data includes first code stream data and second code stream data.
  • the data decoding module 43 is configured to separately decode the first code stream data and the second code stream data.
  • the modules and the beneficial effects of the picture file processing apparatus 4 described in the embodiments of the present application may be specifically implemented according to the method in the foregoing method embodiment shown in FIG. 13, and details are not described herein again.
  • FIG. 21 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present application.
  • the picture file processing apparatus 4000 may include at least one processor 4001, such as a CPU, at least one network interface 4004, a memory 4005, and at least one communication bus 4002.
  • the network interface 4004 may include a standard wired interface or a wireless interface (such as a WI-FI interface).
  • the memory 4005 may be a high-speed RAM memory, or a non-volatile memory such as at least one disk memory.
  • the memory 4005 may also be at least one storage device located remotely from the aforementioned processor 4001. The communication bus 4002 is used to implement connection communication between these components.
  • the picture file processing apparatus 4000 includes a user interface 4003, wherein the user interface 4003 may include a display 40031 and a keyboard 40032.
  • an operating system 40051, a network communication module 40052, a user interface module 40053, and machine readable instructions 40054 may be included in the memory 4005, which serves as a computer readable storage medium; the machine readable instructions 40054 include a picture file processing application 40055.
  • the processor 4001 can be used to call the picture file processing application 40055 stored in the memory 4005, and specifically performs the following operations: parsing the picture file to obtain picture header information and frame header information of the picture file, where
  • the picture header information includes image feature information indicating whether the picture file has transparency data, and
  • the frame header information is used to indicate a code stream data segment of the picture file.
  • when the processor 4001 performs the parsing of the picture file to obtain the picture header information of the picture file, the processor 4001 specifically performs:
  • reading the picture header information of the picture file from the picture header information data segment of the picture file.
  • when the processor 4001 performs the parsing of the picture file to obtain the frame header information of the picture file, the processor 4001 specifically performs:
  • reading the frame header information of the picture file from a frame header information data segment of the picture file.
  • the processor 4001 also performs the following steps:
  • if it is determined from the image feature information that the picture file includes transparency data, reading the code stream data in the code stream data segment indicated by the frame header information in the picture file, the code stream data including the first code stream data and the second code stream data.
  • FIG. 22 is a system architecture diagram of a picture file processing system according to an embodiment of the present application. As shown in FIG. 22, the picture file processing system 5000 includes an encoding device 5001 and a decoding device 5002.
  • the encoding device 5001 may be the encoding device shown in FIGS. 1c to 8c, or may also include a terminal device having an encoding module that implements the functions of the encoding device shown in FIGS. 1c to 8c;
  • the decoding device 5002 may be the decoding device shown in FIGS. 9 to 11, or may include a terminal device having a decoding module that implements the decoding device functions illustrated in FIGS. 9 to 11.
  • the encoding device 5001 may be the picture file processing device shown in FIG. 12, or may also include a picture file processing module implementing the functions of the picture file processing device shown in FIG. 12; correspondingly,
  • the decoding device 5002 may be the picture file processing device shown in FIG. 13, or may also include a picture file processing module implementing the functions of the picture file processing device shown in FIG. 13.
  • the encoding device, the decoding device, the picture file processing device, and the terminal device involved in the embodiments of the present application may include devices such as a tablet computer, a mobile phone, an e-reader, a personal computer (PC), a notebook computer, an in-vehicle device, a network television, and a wearable device, which is not limited in the embodiments of the present application.
  • the encoding device 5001 and the decoding device 5002 related to the embodiments of the present application are specifically introduced below in conjunction with FIG. 23 and FIG. 24, which give a more complete view, from the perspective of functional logic, of other aspects that may be involved in the above-described method, to help the reader further understand the technical solutions described in the present application.
  • FIG. 23 is an exemplary diagram of an encoding module provided by an embodiment of the present application.
  • the encoding device 5001 may include the encoding module 6000 shown in FIG. 23, and the encoding module 6000 may include: an RGB data and transparency data separating sub-module 6001, a first video encoding mode sub-module 6002, a second video encoding mode sub-module 6003, and a picture header information and frame header information encapsulation sub-module 6004.
  • the RGB data and transparency data separation sub-module 6001 is configured to separate RGBA data in the picture source format into RGB data and transparency data.
  • the first video coding mode sub-module 6002 is configured to implement encoding of RGB data to generate first code stream data.
  • the second video coding mode sub-module 6003 is configured to implement encoding of the transparency data to generate second code stream data.
  • the picture header information and frame header information encapsulation sub-module 6004 is configured to generate the picture header information and the frame header information for the code stream data, which includes the first code stream data and the second code stream data, so as to output compressed image data.
  • for a picture file of a static format, the encoding module 6000 receives the input RGBA data of the picture file and divides the RGBA data into RGB data and transparency data using the RGB data and transparency data separating sub-module 6001;
  • the first video coding mode sub-module 6002 encodes the RGB data according to the first video coding mode to generate the first code stream data; then, the second video coding mode sub-module 6003 encodes the transparency data according to the second video coding mode to generate the second code stream data;
  • finally, the picture header information and frame header information encapsulation sub-module 6004 generates the picture header information and the frame header information of the picture file, and the first code stream data, the second code stream data, the frame header information and
  • the picture header information are written into the corresponding data segments to generate the compressed image data corresponding to the RGBA data.
  • for a picture file of an animated format, the encoding module 6000 determines the number of frames included; then, the RGBA data of each frame is divided into RGB data and transparency data by the RGB data and transparency data separation sub-module 6001, the first video
  • encoding mode sub-module 6002 encodes the RGB data according to the first video encoding mode to generate the first code stream data,
  • and the second video encoding mode sub-module 6003 encodes the transparency data according to the second video encoding mode to generate the second code stream data;
  • the picture header information and frame header information encapsulation sub-module 6004 generates the frame header information corresponding to each frame, and writes each piece of code stream data and frame header information into the corresponding data segments; finally, the picture header information and frame header information encapsulation sub-
  • module 6004 generates the picture header information of the picture file and writes it into the corresponding data segment, thereby generating the compressed image data corresponding to the RGBA data.
  • the compressed image data may also be described by other names such as a compressed code stream or an image sequence, which is not limited by the embodiments of the present application.
  • FIG. 24 is a schematic diagram of a decoding module provided by an embodiment of the present application.
  • the decoding device 5002 may include the decoding module 7000 shown in FIG. 24, and the decoding module 7000 may include: a picture header information and frame header information parsing sub-module 7001, a first video decoding mode sub-module 7002, a second video decoding mode
  • sub-module 7003, and an RGB data and transparency data merge sub-module 7004.
  • the picture header information and frame header information parsing sub-module 7001 is configured to parse the compressed image data of the picture file to determine the picture header information and the frame header information, where the compressed image data is the data obtained after encoding by the encoding module shown in FIG. 23.
  • the first video decoding mode sub-module 7002 is configured to implement decoding of the first code stream data, where the first code stream data is generated from RGB data.
  • the second video decoding mode sub-module 7003 is configured to implement decoding of the second code stream data, where the second code stream data is generated from the transparency data.
  • the RGB data and transparency data merging sub-module 7004 is configured to combine RGB data and transparency data into RGBA data for output.
  • for a picture file of a static format, the decoding module 7000 parses the compressed image data of the picture file using the picture header information and frame header information parsing sub-module 7001 to obtain the picture header information and the frame header information of the picture file. If it is determined according to the picture header information that the picture file has transparency data, the first code stream data and the second code stream data are obtained from the code stream data segment indicated by the frame header information; then, the first video decoding mode sub-module 7002 decodes the first code stream data according to the first video
  • decoding mode to generate RGB data; then, the second video decoding mode sub-module 7003 decodes the second code stream data according to the second video decoding mode to generate transparency data; finally, the RGB data and transparency data merging sub-module 7004 combines the RGB data and the transparency data to generate RGBA data and outputs the RGBA data.
  • for a picture file of an animated format, the decoding module 7000 parses the compressed image data of the picture file using the picture header information and frame header information parsing sub-module 7001 to obtain the picture header information and the frame header information of the picture file, and determines the number of frames the picture file includes. Then, if it is determined according to the picture header information that the picture file has transparency data, the first code stream data and the second code stream data are obtained from the code stream data segment indicated by the frame header information of each frame of image; the first video
  • decoding mode sub-module 7002 decodes the first code stream data corresponding to each frame of image according to the first video decoding mode to generate RGB data, and the second video decoding mode sub-module 7003 decodes the second code stream data corresponding to each frame of image according to the second video decoding mode to generate transparency data.
  • finally, the RGB data and transparency data combining sub-module 7004 combines the RGB data and the transparency data of each frame of image to generate RGBA data, and the RGBA data of all the frames included in the compressed image data is output.
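The per-frame decoding and merging path above can be sketched as follows; `decode_frame` is an illustrative name, and the two decoder callables stand in for the first and second video decoding modes, which the text leaves codec-agnostic.

```python
def decode_frame(streams, decode_rgb, decode_alpha):
    """Sketch of one frame through decoding module 7000 (illustrative).

    streams: (first_code_stream, second_code_stream) for one frame.
    decode_rgb / decode_alpha: hypothetical stand-ins for the first /
    second video decoding modes.
    """
    first_stream, second_stream = streams
    rgb_plane = decode_rgb(first_stream)       # list of (r, g, b) tuples
    alpha_plane = decode_alpha(second_stream)  # list of alpha values
    # Sub-module 7004: combine RGB data and transparency data into RGBA data.
    return [(r, g, b, a) for (r, g, b), a in zip(rgb_plane, alpha_plane)]
```

For an animated file, a caller would simply apply this per frame to reproduce the "RGBA data of all frames" output described above.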
  • the encoding device 5001 can encode the picture file of the source format according to the encoding module shown in FIG. 23 to generate compressed image data, and transmit the compressed image data after encoding.
  • after receiving the compressed image data, the decoding device 5002 performs decoding according to the decoding module shown in FIG. 24 to obtain the RGBA data corresponding to the picture file.
  • the source format of the picture file may include, but is not limited to, jpeg, png, gif, and the like.
  • FIG. 25 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
  • the terminal device 8000 includes an encoding module and a decoding module.
  • the encoding module may be a module implementing the functions of the encoding device shown in FIG. 1c to FIG. 8c; correspondingly, the decoding module may be a module implementing the functions of the decoding device shown in FIG. 9 to FIG. 11.
  • the encoding module may implement encoding according to the encoding module 6000 described in FIG. 23, and the decoding module may implement decoding according to the decoding module 7000 shown in FIG. 24.
  • for a picture file of a source format such as jpeg, png or gif, the terminal device can encode it into the new format, so that by using video coding mode encoding the compression ratio of the picture file can be improved and the size of the
  • picture file reduced, which can improve the image loading speed and save network transmission bandwidth and storage cost.
  • meanwhile, the transparency data of the picture file is retained while the video encoding mode is used, guaranteeing the quality of the picture file.
  • the terminal device can also decode the picture file of the new format to obtain the corresponding RGBA data, decoding the RGB data and the transparency data using the video decoding mode to ensure the quality of the picture file.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Abstract

Embodiments of the present application disclose a picture file processing method, apparatus and storage medium. The method includes: obtaining, from a code stream data segment of a picture file, first code stream data and second code stream data generated from a first image in the picture file; decoding the first code stream data according to a first video decoding mode to generate RGB data of the first image; decoding the second code stream data according to a second video decoding mode to generate transparency data of the first image; and generating RGBA data corresponding to the first image from the RGB data and the transparency data of the first image.

Description

Picture file processing method, apparatus and storage medium
This application claims priority to Chinese Patent Application No. 201710225913.7, entitled "Picture file processing method", filed with the Chinese Patent Office on April 8, 2017, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of computer technology, and in particular, to a picture file processing method, apparatus and storage medium.
Background
With the development of the mobile Internet, the download traffic of terminal devices has grown substantially, and picture files account for a large proportion of it. The large volume of picture files also places great pressure on network transmission bandwidth. Reducing the size of picture files would not only speed up loading but also save considerable bandwidth and storage cost.
Summary
Embodiments of the present application provide a picture file processing method, apparatus and storage medium. By decoding first code stream data and second code stream data separately to obtain RGBA data, the transparency data is preserved while a video codec mode is used, which guarantees the quality of the picture file.
An embodiment of the present application provides a picture file processing method, applied to a computing device, including:
obtaining, from a code stream data segment of a picture file, first code stream data and second code stream data generated from a first image in the picture file;
decoding the first code stream data according to a first video decoding mode to generate RGB data of the first image;
decoding the second code stream data according to a second video decoding mode to generate transparency data of the first image; and
generating RGBA data corresponding to the first image from the RGB data and the transparency data of the first image.
An embodiment of the present application provides a picture file processing apparatus, including:
a processor and a memory connected to the processor, the memory storing machine readable instructions executable by the processor, the processor executing the machine readable instructions to perform the following operations:
obtaining, from a code stream data segment of a picture file, first code stream data and second code stream data generated from a first image in the picture file;
decoding the first code stream data according to a first video decoding mode to generate RGB data of the first image;
decoding the second code stream data according to a second video decoding mode to generate transparency data of the first image; and
generating RGBA data corresponding to the first image from the RGB data and the transparency data of the first image.
An embodiment of the present application provides a non-volatile computer readable storage medium storing machine readable instructions that cause a processor to perform the method described above.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Evidently, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1a is a schematic diagram of an implementation environment of a picture file processing method according to an embodiment of the present application;
FIG. 1b is a schematic diagram of the internal structure of a computing device for implementing a picture file processing method according to an embodiment of the present application;
FIG. 1c is a schematic flowchart of a picture file processing method according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of another picture file processing method according to an embodiment of the present application;
FIG. 3 is an example diagram of the multiple frames of images contained in a picture file of an animated format according to an embodiment of the present application;
FIG. 4a is a schematic flowchart of another picture file processing method according to an embodiment of the present application;
FIG. 4b is an example diagram of converting RGB data into YUV data according to an embodiment of the present application;
FIG. 4c is an example diagram of converting transparency data into YUV data according to an embodiment of the present application;
FIG. 4d is another example diagram of converting transparency data into YUV data according to an embodiment of the present application;
FIG. 5a is an example diagram of picture header information according to an embodiment of the present application;
FIG. 5b is an example diagram of an image feature information data segment according to an embodiment of the present application;
FIG. 5c is an example diagram of a user-defined information data segment according to an embodiment of the present application;
FIG. 6a is an example diagram of the encapsulation of a picture file of a static format according to an embodiment of the present application;
FIG. 6b is an example diagram of the encapsulation of a picture file of an animated format according to an embodiment of the present application;
FIG. 7a is another example diagram of the encapsulation of a picture file of a static format according to an embodiment of the present application;
FIG. 7b is another example diagram of the encapsulation of a picture file of an animated format according to an embodiment of the present application;
FIG. 8a is an example diagram of frame header information according to an embodiment of the present application;
FIG. 8b is an example diagram of image frame header information according to an embodiment of the present application;
FIG. 8c is an example diagram of transparency channel frame header information according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of another picture file processing method according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of another picture file processing method according to an embodiment of the present application;
FIG. 11 is a schematic flowchart of another picture file processing method according to an embodiment of the present application;
FIG. 12 is a schematic flowchart of another picture file processing method according to an embodiment of the present application;
FIG. 13 is a schematic flowchart of another picture file processing method according to an embodiment of the present application;
FIG. 14a is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
FIG. 14b is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
FIG. 14c is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
FIG. 14d is a schematic structural diagram of an encoding apparatus according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of another encoding apparatus according to an embodiment of the present application;
FIG. 16a is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
FIG. 16b is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
FIG. 16c is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
FIG. 16d is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
FIG. 16e is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application;
FIG. 17 is a schematic structural diagram of another decoding apparatus according to an embodiment of the present application;
FIG. 18 is a schematic structural diagram of a picture file processing apparatus according to an embodiment of the present application;
FIG. 19 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present application;
FIG. 20 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present application;
FIG. 21 is a schematic structural diagram of another picture file processing apparatus according to an embodiment of the present application;
FIG. 22 is a system architecture diagram of a picture file processing system according to an embodiment of the present application;
FIG. 23 is an example diagram of an encoding module according to an embodiment of the present application;
FIG. 24 is an example diagram of a decoding module according to an embodiment of the present application;
FIG. 25 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. Evidently, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Generally, when a large number of picture files need to be transmitted, one way to save bandwidth or storage cost is to lower the quality of the picture files, for example reducing a jpeg file from jpeg80 to jpeg70 or even lower; but the quality then drops considerably, which harms the user experience. Another way is to adopt a more efficient picture compression method; however, the current mainstream picture formats, mainly jpeg, png and gif, all suffer from low compression efficiency under the premise of guaranteeing picture quality.
In view of this, some embodiments of the present application propose a picture file processing method, apparatus and storage medium, which encode RGB data and transparency data separately in a video coding mode, improving the compression ratio of the picture file while guaranteeing its quality. In the embodiments of the present application, when the first image is RGBA data, the encoding apparatus obtains the RGBA data corresponding to the first image in the picture file and separates it to obtain the RGB data and the transparency data of the first image; encodes the RGB data of the first image according to a first video encoding mode to generate first code stream data; encodes the transparency data of the first image according to a second video encoding mode to generate second code stream data; and writes the first code stream data and the second code stream data into a code stream data segment. Encoding in a video coding mode improves the compression ratio of the picture file and reduces its size, which speeds up picture loading and saves network transmission bandwidth and storage cost; in addition, encoding the RGB data and the transparency data separately preserves the transparency data of the picture file while a video coding mode is used, thereby guaranteeing the quality of the picture file.
FIG. 1a is a schematic diagram of an implementation environment of a picture file processing method according to an embodiment of the present application. The computing device 10 is configured to implement the picture file processing method provided by any embodiment of the present application. The computing device 10 is connected to a user terminal 20 through a network 30, which may be a wired or a wireless network.
FIG. 1b is a schematic diagram of the internal structure of a computing device 10 for implementing a picture file processing method according to an embodiment of the present application. Referring to FIG. 1b, the computing device 10 includes a processor 100012, a non-volatile storage medium 100013 and an internal memory 100014 connected through a system bus 100011. The non-volatile storage medium 100013 of the computing device 10 stores an operating system 1000131 and a picture file processing apparatus 1000132, the latter being configured to implement the picture file processing method provided by any embodiment of the present application. The processor 100012 of the computing device 10 provides computing and control capability and supports the operation of the entire terminal device. The internal memory 100014 in the computing device 10 provides an environment for the operation of the picture file processing apparatus in the non-volatile storage medium 100013; it may store computer readable instructions which, when executed by the processor 100012, cause the processor 100012 to perform the picture file processing method provided by any embodiment of the present application. The computing device 10 may be a terminal or a server. The terminal may be a personal computer or a mobile electronic device, the latter including at least one of a mobile phone, a tablet computer, a personal digital assistant or a wearable device. The server may be implemented as an independent server or a server cluster composed of multiple physical servers. A person skilled in the art may understand that the structure shown in FIG. 1b is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computing device to which the solution is applied; a specific computing device may include more or fewer components than shown in FIG. 1b, combine certain components, or have a different component arrangement.
Referring to FIG. 1c, a schematic flowchart of a picture file processing method according to an embodiment of the present application; the method may be performed by the above computing device. As shown in FIG. 1c, assuming that the computing device is a terminal device, the method of the embodiment of the present application may include steps 101 to 104.
Step 101: obtain the RGBA data corresponding to a first image in a picture file, and separate the RGBA data to obtain the RGB data and the transparency data of the first image.
Specifically, an encoding apparatus running in the terminal device obtains the RGBA data corresponding to the first image in the picture file and separates it to obtain the RGB data and the transparency data of the first image. The data corresponding to the first image is RGBA data, a color space representing Red, Green, Blue and transparency information (Alpha). The RGBA data corresponding to the first image is separated into RGB data and transparency data: the RGB data is the color data contained in the RGBA data, and the transparency data is the transparency data contained in the RGBA data.
For example, if the data corresponding to the first image is RGBA data: since the first image is composed of many pixels, each corresponding to one RGBA value, a first image composed of N pixels contains N RGBA values, in the following form:
RGBA RGBA RGBA RGBA RGBA RGBA……RGBA
Therefore, according to the embodiment of the present application, the encoding apparatus needs to separate the RGBA data of the first image to obtain its RGB data and transparency data. For example, after performing the separation operation on the first image composed of the above N pixels, the RGB data and the transparency data of each of the N pixels are obtained, in the following form:
RGB RGB RGB RGB RGB RGB……RGB
AAAAAA……A
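The per-pixel separation illustrated above, from an interleaved RGBA buffer into an RGB plane and a transparency plane, can be sketched as follows; the function name and byte-buffer representation are a minimal illustration, not the patent's normative procedure.

```python
def split_rgba(buf):
    """Split an interleaved RGBA byte buffer 'R G B A R G B A ...'
    into an RGB plane and an alpha (transparency) plane."""
    rgb, alpha = bytearray(), bytearray()
    for i in range(0, len(buf), 4):
        rgb += buf[i:i + 3]       # R, G, B of one pixel
        alpha.append(buf[i + 3])  # A of the same pixel
    return bytes(rgb), bytes(alpha)
```

The two returned planes correspond to the "RGB RGB ... RGB" and "AAA ... A" sequences shown above, which are then encoded separately.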
Further, after the RGB data and the transparency data of the first image are obtained, step 102 and step 103 are performed respectively.
Step 102: encode the RGB data of the first image according to a first video encoding mode to generate first code stream data.
Specifically, the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data. The first image may be the single frame of image contained in a picture file of a static format, or any one of the multiple frames of images contained in a picture file of an animated format.
Step 103: encode the transparency data of the first image according to a second video encoding mode to generate second code stream data.
Specifically, the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode to generate the second code stream data.
For steps 102 and 103, the first or second video encoding mode may include, but is not limited to, an intra-prediction (I) frame encoding mode and an inter-prediction (P) frame encoding mode. An I frame is a key frame: decoding I frame data requires only the data of the frame itself to reconstruct a complete image; a P frame must reference a previously encoded frame to reconstruct a complete image. The embodiments of the present application do not limit the video encoding mode adopted for each frame of image in a picture file of a static or animated format.
For example, for a picture file of a static format, since it contains only one frame of image (the first image in this embodiment), I frame encoding is performed on the RGB data and the transparency data of the first image. As another example, for a picture file of an animated format, which generally contains at least two frames of images, in the embodiments of the present application I frame encoding is performed on the RGB data and the transparency data of the first frame of image, while the RGB data and the transparency data of non-first frames may be encoded as either I frames or P frames.
Step 104: write the first code stream data and the second code stream data into a code stream data segment of the picture file.
Specifically, the encoding apparatus writes the first code stream data generated from the RGB data of the first image and the second code stream data generated from its transparency data into the code stream data segment of the picture file. The first code stream data and the second code stream data constitute the complete code stream data corresponding to the first image; that is, the RGBA data of the first image can be obtained by decoding the first code stream data and the second code stream data.
It should be noted that steps 102 and 103 may be performed in any order.
It should be noted that the RGBA data input before encoding in the embodiments of the present application may be obtained by decoding picture files of various formats, which may be any of Joint Photographic Experts Group (JPEG), Bitmap (BMP), Portable Network Graphic Format (PNG), Animated Portable Network Graphics (APNG), Graphics Interchange Format (GIF) and other formats; the embodiments of the present application do not limit the format of the picture file before encoding.
It should be noted that the first image in the embodiments of the present application is RGBA data containing both RGB data and transparency data. Where the first image contains only RGB data, the encoding apparatus may, after obtaining the RGB data corresponding to the first image, perform step 102 on the RGB data to generate the first code stream data and determine the first code stream data as the complete code stream data corresponding to the first image. In this way, a first image containing only RGB data can still be encoded in a video encoding mode to achieve its compression.
In the embodiments of the present application, when the first image is RGBA data, the encoding apparatus obtains the RGBA data corresponding to the first image in the picture file and separates it to obtain the RGB data and the transparency data of the first image; encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data; encodes the transparency data of the first image according to the second video encoding mode to generate the second code stream data; and writes the first code stream data and the second code stream data into the code stream data segment. Encoding in a video encoding mode improves the compression ratio of the picture file and reduces its size, which speeds up picture loading and saves network transmission bandwidth and storage cost; encoding the RGB data and the transparency data of the picture file separately preserves the transparency data while a video encoding mode is used, guaranteeing the quality of the picture file.
Referring to FIG. 2, a schematic flowchart of another picture file processing method according to an embodiment of the present application; the method may be performed by the aforementioned computing device. As shown in FIG. 2, assuming that the computing device is a terminal device, the method of the embodiment of the present application may include steps 201 to 207. This embodiment is described taking a picture file of an animated format as an example; see the detailed description below.
Step 201: obtain the RGBA data corresponding to the first image corresponding to the k-th frame in a picture file of an animated format, and separate the RGBA data to obtain the RGB data and the transparency data of the first image.
Specifically, the encoding apparatus running in the terminal device obtains the picture file of the animated format to be encoded, which contains at least two frames of images, and obtains the first image corresponding to the k-th frame of the file, where the k-th frame may be any one of the at least two frames and k is a positive integer greater than 0.
According to some embodiments of the present application, the encoding apparatus may encode in the order of the images corresponding to the frames of the picture file of the animated format, i.e. it may first obtain the image corresponding to the first frame of the file. The embodiments of the present application do not limit the order in which the encoding apparatus obtains the images contained in the picture file of the animated format.
Further, if the data corresponding to the first image is RGBA data, a color space representing Red, Green, Blue and Alpha, the RGBA data corresponding to the first image is separated into RGB data and transparency data. Specifically: since the first image is composed of many pixels, each corresponding to one RGBA value, a first image composed of N pixels contains N RGBA values, in the following form:
RGBA RGBA RGBA RGBA RGBA RGBA……RGBA
Therefore, the encoding apparatus needs to separate the RGBA data of the first image to obtain its RGB data and transparency data. For example, after performing the separation operation on the first image composed of the above N pixels, the RGB data and the transparency data of each of the N pixels are obtained, in the following form:
RGB RGB RGB RGB RGB RGB……RGB
AAAAAA……A
Further, after the RGB data and the transparency data of the first image are obtained, step 202 and step 203 are performed respectively.
Step 202: encode the RGB data of the first image according to a first video encoding mode to generate first code stream data.
Specifically, the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data. The RGB data is the color data separated from the RGBA data corresponding to the first image.
Step 203: encode the transparency data of the first image according to a second video encoding mode to generate second code stream data.
Specifically, the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode to generate the second code stream data. The transparency data is separated from the RGBA data corresponding to the first image.
It should be noted that steps 202 and 203 may be performed in any order.
Step 204: write the first code stream data and the second code stream data into a code stream data segment of the picture file.
Specifically, the encoding apparatus writes the first code stream data generated from the RGB data of the first image and the second code stream data generated from its transparency data into the code stream data segment of the picture file. The first code stream data and the second code stream data constitute the complete code stream data corresponding to the first image; that is, the RGBA data of the first image can be obtained by decoding the first code stream data and the second code stream data.
Step 205: determine whether the k-th frame is the last frame of the picture file of the animated format.
Specifically, the encoding apparatus determines whether the k-th frame is the last frame of the picture file of the animated format. If it is the last frame, the encoding of the file is complete and step 207 is performed; if not, the file still contains unencoded images and step 206 is performed.
Step 206: if the k-th frame is not the last frame of the picture file of the animated format, update k and trigger the operation of obtaining the RGBA data corresponding to the first image corresponding to the k-th frame of the picture file of the animated format and separating the RGBA data to obtain the RGB data and the transparency data of the first image.
Specifically, when the encoding apparatus determines that the k-th frame is not the last frame of the picture file of the animated format, it encodes the image corresponding to the next frame, i.e. updates k with the value (k+1). After k is updated, the operation of obtaining the RGBA data corresponding to the first image corresponding to the k-th frame of the picture file of the animated format and separating the RGBA data to obtain the RGB data and the transparency data of the first image is triggered.
It can be understood that the image obtained with the updated k and the image obtained before the update do not correspond to the same frame. For ease of description, the image corresponding to the k-th frame before the update is referred to as the first image, and the image corresponding to the k-th frame after the update as the second image, so as to distinguish them.
In some embodiments of the present application, when steps 202 to 204 are performed on the second image, whose corresponding RGBA data contains RGB data and transparency data, the encoding apparatus encodes the RGB data of the second image according to a third video encoding mode to generate third code stream data; encodes the transparency data of the second image according to a fourth video encoding mode to generate fourth code stream data; and writes the third code stream data and the fourth code stream data into a code stream data segment of the picture file.
For steps 202 and 203, the first, second, third or fourth video encoding mode mentioned above may include, but is not limited to, an I frame encoding mode and a P frame encoding mode. An I frame is a key frame: decoding I frame data requires only the data of the frame itself to reconstruct a complete image; a P frame must reference a previously encoded frame to reconstruct a complete image. The embodiments of the present application do not limit the video encoding modes adopted for the RGB data and the transparency data of each frame of image in a picture file of an animated format. For example, the RGB data and the transparency data of the same frame may be encoded in different video encoding modes, or in the same one; the RGB data of different frames may be encoded in different video encoding modes, or in the same one; the transparency data of different frames may be encoded in different video encoding modes, or in the same one.
It should further be noted that the picture file of the animated format contains multiple code stream data segments. In some embodiments of the present application one frame of image corresponds to one code stream data segment; in other embodiments one piece of code stream data corresponds to one code stream data segment. Therefore, the code stream data segment into which the first and second code stream data are written differs from the one into which the third and fourth code stream data are written.
For example, referring also to FIG. 3, an example diagram of the multiple frames of images contained in a picture file of an animated format according to an embodiment of the present application. As shown in FIG. 3, the file contains multiple frames of images, e.g. the images corresponding to frames 1, 2, 3, 4 and so on, each of which contains RGB data and transparency data. In some embodiments of the present application, the encoding apparatus may encode the RGB data and the transparency data of the image of frame 1 in the I frame encoding mode, and encode the images of frames 2, 3, 4 and so on in the P frame encoding mode. For example, encoding the RGB data of the image of frame 2 in the P frame encoding mode requires referencing the RGB data of the image of frame 1, and encoding the transparency data of the image of frame 2 in the P frame encoding mode requires referencing the transparency data of the image of frame 1; by analogy, frames 3, 4 and so on may be encoded in the P frame encoding mode with reference to frame 2.
It should be noted that the above is only one optional encoding scheme for a picture file of an animated format; alternatively, the encoding apparatus may encode frames 1, 2, 3, 4 and so on all in the I frame encoding mode.
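One possible assignment of coding modes for the animated case discussed above (first frame intra-coded, later frames predicted from the previous frame) can be sketched as follows; this particular policy is only one of the options the embodiments allow, and the function name is illustrative.

```python
def choose_frame_modes(frame_count):
    """Return a (rgb_mode, alpha_mode) pair per frame: the first frame
    is I-coded; later frames are P-coded, referencing the previous frame.
    The text also permits all-I encoding, or different modes for the
    RGB plane and the transparency plane of the same frame."""
    modes = []
    for k in range(frame_count):
        mode = "I" if k == 0 else "P"
        modes.append((mode, mode))  # RGB and alpha use the same mode here
    return modes
```

An all-I variant would simply return `("I", "I")` for every frame, trading compression ratio for random access to individual frames.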
Step 207: if the k-th frame is the last frame of the picture file of the animated format, the encoding of the picture file of the animated format is complete.
Specifically, when the encoding apparatus determines that the k-th frame is the last frame of the picture file of the animated format, the encoding of the file is complete.
In some embodiments of the present application, the encoding apparatus may generate frame header information for the code stream data generated from the image corresponding to each frame, and generate picture header information for the picture file of the animated format. The picture header information then makes it possible to determine whether the picture file contains transparency data, and hence whether the decoding process should obtain only the first code stream data generated from the RGB data, or both the first code stream data generated from the RGB data and the second code stream data generated from the transparency data.
It should be noted that in the embodiments of the present application the image corresponding to each frame of the picture file of the animated format is RGBA data containing RGB data and transparency data. Where the image corresponding to each frame contains only RGB data, the encoding apparatus may perform step 202 on the RGB data of each frame of image to generate the first code stream data and write it into the code stream data segment of the picture file, finally determining the first code stream data as the complete code stream data corresponding to the first image. In this way, a first image containing only RGB data can still be encoded in a video encoding mode to achieve its compression.
It should also be noted that the RGBA data input before encoding in the embodiments of the present application may be obtained by decoding picture files of various animated formats, which may be any of APNG, GIF and other formats; the embodiments of the present application do not limit the animated format of the picture file before encoding.
In the embodiments of the present application, when the first image in a picture file of an animated format is RGBA data, the encoding apparatus obtains the RGBA data corresponding to the first image in the picture file and separates it to obtain the RGB data and the transparency data of the first image; encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data; encodes the transparency data of the first image according to the second video encoding mode to generate the second code stream data; and writes the first code stream data and the second code stream data into the code stream data segment. In addition, the image corresponding to each frame of the picture file of the animated format may be encoded in the same way as the first image. Encoding in a video encoding mode improves the compression ratio of the picture file and reduces its size, which speeds up picture loading and saves network transmission bandwidth and storage cost; encoding the RGB data and the transparency data of the picture file separately preserves the transparency data while a video encoding mode is used, guaranteeing the quality of the picture file.
Referring to FIG. 4a, a schematic flowchart of another picture file processing method according to an embodiment of the present application; the method may be performed by the aforementioned computing device. As shown in FIG. 4a, assuming that the computing device is a terminal device, the method of the embodiment of the present application may include steps 301 to 307.
Step 301: obtain the RGBA data corresponding to a first image in a picture file, and separate the RGBA data to obtain the RGB data and the transparency data of the first image.
Specifically, the encoding apparatus running in the terminal device obtains the RGBA data corresponding to the first image in the picture file and separates it to obtain the RGB data and the transparency data of the first image. The data corresponding to the first image is RGBA data, a color space representing Red, Green, Blue and Alpha. The RGBA data corresponding to the first image is separated into RGB data and transparency data: the RGB data is the color data contained in the RGBA data, and the transparency data is the transparency data contained in the RGBA data.
For example, if the data corresponding to the first image is RGBA data: since the first image is composed of many pixels, each corresponding to one RGBA value, a first image composed of N pixels contains N RGBA values, in the following form:
RGBA RGBA RGBA RGBA RGBA RGBA……RGBA
Therefore, according to the embodiment of the present application, the encoding apparatus needs to separate the RGBA data of the first image to obtain its RGB data and transparency data. For example, after performing the separation operation on the first image composed of the above N pixels, the RGB data and the transparency data of each of the N pixels are obtained, in the following form:
RGB RGB RGB RGB RGB RGB……RGB
AAAAAA……A
Further, after the RGB data and the transparency data of the first image are obtained, step 302 and step 303 are performed respectively.
Step 302: encode the RGB data of the first image according to a first video encoding mode to generate first code stream data.
Specifically, the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data. The first image may be the single frame of image contained in a picture file of a static format, or any one of the multiple frames of images contained in a picture file of an animated format.
In some embodiments of the present application, the specific process by which the encoding apparatus encodes the RGB data of the first image according to the first video encoding mode to generate the first code stream data is: convert the RGB data of the first image into first YUV data; encode the first YUV data according to the first video encoding mode to generate the first code stream data. In some embodiments, the encoding apparatus may convert the RGB data into the first YUV data according to a preset YUV color space format, which may include, but is not limited to, YUV420, YUV422 and YUV444.
Step 303: encode the transparency data of the first image according to a second video encoding mode to generate second code stream data.
Specifically, the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode to generate the second code stream data.
The first video encoding mode of step 302 or the second video encoding mode of step 303 may include, but is not limited to, an I frame encoding mode and a P frame encoding mode. An I frame is a key frame: decoding I frame data requires only the data of the frame itself to reconstruct a complete image; a P frame must reference a previously encoded frame to reconstruct a complete image. The embodiments of the present application do not limit the video encoding mode adopted for each frame of image in a picture file of a static or animated format.
For example, for a picture file of a static format, since it contains only one frame of image (the first image in this embodiment), I frame encoding is performed on the RGB data and the transparency data of the first image. As another example, for a picture file of an animated format, which contains at least two frames of images, in the embodiments of the present application I frame encoding is performed on the RGB data and the transparency data of the first frame of image, while the RGB data and the transparency data of non-first frames may be encoded as either I frames or P frames.
In some embodiments of the present application, the specific process by which the encoding apparatus encodes the transparency data of the first image according to the second video encoding mode to generate the second code stream data is: convert the transparency data of the first image into second YUV data; encode the second YUV data according to the second video encoding mode to generate the second code stream data.
Specifically, the encoding apparatus converts the transparency data of the first image into the second YUV data as follows: in some embodiments of the present application, the encoding apparatus sets the transparency data of the first image as the Y component of the second YUV data and does not set the U and V components of the second YUV data; or, in other embodiments, it sets the transparency data of the first image as the Y component of the second YUV data and sets the U and V components of the second YUV data to preset data. In the embodiments of the present application the encoding apparatus may convert the transparency data into the second YUV data according to a preset YUV color space format, which may include, but is not limited to, YUV400, YUV420, YUV422 and YUV444, and may set the U and V components according to that format.
Further, if the data corresponding to the first image is RGBA data, the encoding apparatus obtains the RGB data and the transparency data of the first image by separating its RGBA data. The conversion of the RGB data of the first image into the first YUV data and of its transparency data into the second YUV data is illustrated below, taking a first image of 4 pixels as an example: the RGB data of the first image is the RGB data of these 4 pixels, and its transparency data is their transparency data. For the specific conversion process, see the examples of FIG. 4b to FIG. 4d.
Referring to FIG. 4b, an example diagram of converting RGB data into YUV data according to an embodiment of the present application. As shown in FIG. 4b, the RGB data contains the RGB data of 4 pixels, which is converted according to a color space conversion mode. If the YUV color space format is YUV444, the RGB data of one pixel is converted into one YUV sample according to the corresponding conversion formula, so the RGB data of the 4 pixels is converted into 4 YUV samples, and the first YUV data contains these 4 YUV samples. Different YUV color space formats correspond to different conversion formulas.
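The per-pixel RGB-to-YUV conversion for the YUV444 case can be sketched as follows. The patent does not fix the conversion matrix; the BT.601 full-range coefficients used here are an assumed example of "the corresponding conversion formula".

```python
def rgb_to_yuv444(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to one YUV sample
    (YUV444: one YUV sample per pixel). BT.601 full-range coefficients
    are assumed for illustration; other formats/matrices differ."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return round(y), round(u), round(v)
```

With four input pixels, applying this function to each yields the four YUV samples that make up the first YUV data in the example above.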
Further, referring to FIG. 4c and FIG. 4d, example diagrams of converting transparency data into YUV data according to embodiments of the present application. First, as shown in FIG. 4c and 4d, the transparency data contains the A data of 4 pixels, where A denotes transparency; the transparency data of each pixel is set as the Y component. Then the YUV color space format is determined, so as to determine the second YUV data.
If the YUV color space format is YUV400, the U and V components are not set, and the Y components of the 4 pixels are determined as the second YUV data of the first image (as shown in FIG. 4c).
If the YUV color space format is one other than YUV400, in which U and V components exist, the U and V components are set to preset data. As shown in FIG. 4d, the conversion there uses the YUV444 color space format, i.e. one U component and one V component of preset data are set for each pixel. As further examples, with YUV422 one U component and one V component of preset data are set for every two pixels, and with YUV420 for every four pixels; other formats follow by analogy and are not repeated here. Finally, the YUV data of the 4 pixels is determined as the second YUV data of the first image.
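The packing of a transparency plane as YUV described above can be sketched as follows; the function name and the preset chroma value 128 are illustrative assumptions (the text only says the U and V components are set to "preset data"), and the chroma counts follow the per-pixel ratios given above.

```python
def alpha_to_yuv(alpha_plane, fmt="YUV420", fill=128):
    """Pack a transparency plane as YUV: each alpha value becomes a
    Y sample; U/V are omitted (YUV400) or set to preset data.
    Chroma samples per plane: one per pixel (YUV444), per 2 pixels
    (YUV422), or per 4 pixels (YUV420), as in the examples above."""
    y = list(alpha_plane)
    if fmt == "YUV400":
        return y, [], []
    ratio = {"YUV444": 1, "YUV422": 2, "YUV420": 4}[fmt]
    n = (len(y) + ratio - 1) // ratio  # round up for partial groups
    return y, [fill] * n, [fill] * n
```

Because the U and V planes carry no information, a video encoder compresses them to almost nothing, so the cost of carrying transparency as a second YUV stream stays low.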
需要说明的是,步骤302和步骤303在执行过程中并无先后顺序之分。
步骤304,将所述第一码流数据和所述第二码流数据写入所述图片文件的码流数据段中。
具体的,所述编码装置将由第一图像的RGB数据生成的第一码流数据,以及由第一图像的透明度数据生成的第二码流数据,写入图片文件的码流数据段中。所述第一码流数据和所述第二码流数据为第一图像对应的完整的码流数据,即通过对第一码流数据和第二码流数据进行解码能够获得第一图像的RGBA数据。
步骤305,生成所述图片文件对应的图片头信息和帧头信息。
具体的,所述编码装置生成所述图片文件对应的图片头信息和帧头信息。其中,该图片文件可以为静态格式的图片文件,即仅包含该第一图像;或者,该图片文件为动态格式的图片文件,即包含所述第一图像以及其他图像。不论该图片文件是静态格式的图片文件还是动态格式的图片文件,所述编码装置都需要生成该图片文件对应的图片头信息。其中,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,以使解码装置通过所述图像特征信息确定所述图片文件是否包含透明度数据从而确定如何获取码流数据以及获取到的码流数据是否包含由透明度数据生成的第二码流数据。
进一步的,所述帧头信息用于指示所述图片文件的码流数据段,以使解码装置通过帧头信息确定能够获取到码流数据的码流数据段,进而实现对码流数据的解码。
需要说明的是,本申请实施例对步骤305生成所述图片文件对应的图片头信息和帧头信息与步骤302、步骤303、步骤304的先后顺序不做限定。
步骤306,将所述图片头信息写入所述图片文件的图片头信息数据段中。
具体的,所述编码装置将所述图片头信息写入所述图片文件的图片头信息数据段中。其中,所述图片头信息包括图像文件标识符、解码器标识符、版本号和所述图像特征信息;所述图像文件标识符用于表示所述图片文件的类型,所述解码器标识符用于表示所述图片文件采用的编解码标准的标识;所述版本号用于表示所述图片文件采用的编解码标准的档次。
在本申请一些实施例中,所述图片头信息还可以包括用户自定义信息数据段,所述用户自定义信息数据段包括所述用户自定义信息起始码、所述用户自定义信息数据段的长度和用户自定义信息;所述用户自定义信息包括可交换图像文件(Exchangeable Image File,EXIF)信息,例如拍摄时的光圈、快门、白平衡、国际标准化组织(International Organization for Standardization,ISO)感光度、焦距、日期时间等拍摄条件,以及相机品牌、型号、色彩编码、拍摄时录制的声音、全球定位系统数据、缩略图等;用户自定义信息包含了可以由用户自定义而设定的信息,本申请实施例对此不做限定。
其中,所述图像特征信息还包括所述图像特征信息起始码、所述图像特征信息数据段长度、所述图片文件是否为静态格式的图片文件、所述图片文件是否为动态格式的图片文件、所述图片文件是否为无损编码、所述图片文件采用的YUV颜色空间值域、所述图片文件的宽度、所述图片文件的高度和用于指示若所述图片文件为动态格式的图片文件的帧数。在本申请一些实施例中,所述图像特征信息还可以包括所述图片文件采用的YUV颜色空间格式。
举例来说,请参见图5a,为本申请实施例提供的一种图片头信息的示例图。如图5a所示,一图片文件的图片头信息由图像序列头数据段、图像特征信息数据段、用户自定义信息数据段三部分组成。
其中,图像序列头数据段包括图像文件标识符、解码器标识符、版本号和所述图像特征信息。
图像文件标识符(image_identifier):用来表示该图片文件的类型,可以通过预设标识来表示,例如图像文件标识符占用4个字节,比如该图像文件标识符为位串‘AVSP’,用来标识这是一个AVS图片文件。
解码器标识符:用来表示对当前的图片文件进行压缩所采用的编解码标准的标识,例如,采用4字节表示。或者也可以解释为表示当前图片解码采用的解码器内核型号,当采用AVS2内核时,解码器标识符code_id为‘AVS2’。
版本号:用来表示压缩标准标识指示的编解码标准的档次,例如,档次可以包括基本档次(Baseline Profile)、主要档次(Main Profile)、扩展档次(Extended Profile)等等。例如,采用8位无符号数标识,如表一所示,给出了版本号的类型。
表一
版本号的取值 档次
‘B’ Base Profile
‘M’ Main Profile
‘H’ High Profile
请一并参见图5b,为本申请实施例提供的一种图像特征信息数据段的示例图,如图5b所示,图像特征信息数据段包括图像特征信息起始码、图像特征信息数据段长度、是否有alpha通道标志(即图5b中所示的图像透明度标志)、动态图像标志、YUV颜色空间格式、无损模式标识、YUV颜色空间值域标志、保留位、图像宽度、图像高度和帧数。请参见以下具体介绍。
图像特征信息起始码:是用于指示图片文件的图像特征信息数据段起始位置的字段,例如,采用1字节表示,如采用字段D0。
图像特征信息数据段长度:表示图像特征信息数据段所占的字节数,例如,采用2字节表示。比如,对于静态格式的图片文件而言,图5b中的图像特征信息数据段一共有9个字节,可以填写9;对于动态格式的图片文件而言,图5b中的图像特征信息数据段由于还包含帧数字段,一共有12个字节,可以填写12。
图像透明度标志:用于表示该图片文件中的图像是否携带有透明度数据。例如,采用一个比特表示,0表示该图片文件中的图像没有携带透明度数据,1表示该图片文件中的图像携带有透明度数据;可以理解的是,是否有alpha通道与是否包含透明度数据是代表相同的意思。
动态图像标志:用于表示所述图片文件是动态格式的图片文件还是静态格式的图片文件,例如,采用一个比特表示,0表示是静态格式的图片文件,1表示是动态格式的图片文件。
YUV颜色空间格式:用于指示图片文件的RGB数据转换为YUV数据所采用的色度分量格式,例如,采用两个比特表示,如下表二所示。
表二
YUV_颜色空间格式的值 YUV颜色空间格式
00 4:0:0
01 4:2:0
10 4:2:2(保留)
11 4:4:4
无损模式标志:用于表示采用的是无损编码还是有损编码,例如,采用一个比特表示,0表示有损编码,1表示无损编码。其中,对图片文件中的RGB数据直接采用视频编码模式进行编码,表示是无损编码;先将图片文件中的RGB数据转换为YUV数据,再对YUV数据进行编码,则表示是有损编码。
YUV颜色空间值域标志:用于表示YUV颜色空间值域范围是否符合ITU-R BT.601标准。例如采用一个比特表示,1表示Y分量的值域范围为[16,235],U、V分量的值域范围为[16,240];0表示Y分量和U、V分量的值域范围均为[0,255]。
保留位:10位无符号整数。将字节中的多余比特位设定为保留比特位。
图像宽度:用来表示图片文件中每个图像的宽度,例如,若图像宽度范围在0-65535之间,可以通过2个字节表示。
图像高度:用来表示图片文件中每个图像的高度,例如,若图像高度范围在0-65535之间,可以通过2个字节表示。
图像帧数:只有在动态格式的图片文件情况下才会存在,用来表示图片文件所包含的总帧数,例如,采用3字节表示。
请一并参见图5c,为本申请实施例提供的一种用户自定义信息数据段的示例图,如图5c所示,具体参见以下详细介绍。
用户自定义信息起始码:是用于指示用户自定义信息起始位置的字段,例如,采用1字节表示,如,位串‘0x000001BC’标识用户自定义信息的开始。
用户自定义信息数据段长度:表示当前用户自定义信息的数据长度,例如,采用2字节表示。
用户自定义信息:用来写入用户需要传入的数据,例如EXIF等信息,占用的字节数可以根据用户自定义信息的长度来确定。
需要说明的是,以上仅为举例说明,本申请实施例对图片头信息包含的各个信息的名称、各个信息在图片头信息中的位置以及表示各个信息所占用的比特数不做限定。
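为帮助理解图5b所述各字段的组织方式,以下Python代码给出打包图像特征信息数据段的一个极简示意。其中各标志位的比特排布、保留位位置、字段字节宽度以及起始码取值0xD0均为本示例的假设,并非本申请定义的规范:

```python
import struct

def pack_image_feature_info(width, height, has_alpha, is_animated,
                            yuv_fmt, lossless, bt601, frames=0):
    """按图5b的字段顺序打包图像特征信息数据段(比特排布为示例假设)。"""
    # 标志字节:透明度标志、动态图像标志、YUV颜色空间格式(2比特)、
    # 无损模式标志、值域标志,低2比特作为保留位
    flags = (has_alpha << 7) | (is_animated << 6) | (yuv_fmt << 4) \
            | (lossless << 3) | (bt601 << 2)
    body = struct.pack(">BHH", flags, width, height)   # 标志+图像宽度+图像高度
    if is_animated:
        body += frames.to_bytes(3, "big")              # 帧数仅动态格式存在,3字节
    # 起始码(此处假设为0xD0)+ 数据段长度(此处按正文字段长度计)
    return struct.pack(">BH", 0xD0, len(body)) + body
```

例如,静态格式时不写入帧数字段,数据段整体较动态格式短3个字节,与正文中两种格式数据段长度相差3字节的说明一致。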
步骤307,将所述帧头信息写入所述图片文件的帧头信息数据段中。
具体的,所述编码装置将所述帧头信息写入所述图片文件的帧头信息数据段中。
在本申请一些实施例中,图片文件的一帧图像对应一个帧头信息。具体的,针对图片文件为静态格式的图片文件的情况,静态格式的图片 文件包含一帧图像,即为第一图像,因此,该静态格式的图片文件中包含一个帧头信息。针对图片文件为动态格式的图片文件的情况,动态格式的图片文件一般包含至少两帧图像,对于其中的每一帧图像均增加一个帧头信息。
请参见图6a,为本申请实施例提供的一种静态格式的图片文件的封装示例图。如图6a所示,该图片文件包含图片头信息数据段、帧头信息数据段、码流数据段。一静态格式的图片文件包含图片头信息、帧头信息和表示图片文件的图像的码流数据,这里的码流数据包含由该帧图像的RGB数据生成的第一码流数据和由该帧图像的透明度数据生成的第二码流数据。将各个信息或数据写入对应的数据段中,例如,将图片头信息写入图片头信息数据段;将帧头信息写入帧头信息数据段;将码流数据写入码流数据段。需要说明的是,由于码流数据段中的第一码流数据和第二码流数据是通过视频编码模式得到的,因此码流数据段可以采用视频帧数据段来描述,这样在视频帧数据段中写入的信息为对该静态格式的图像文件进行编码得到的第一码流数据和第二码流数据。
请参见图6b,为本申请实施例提供的一种动态格式的图片文件的封装示例图。如图6b所示,该图片文件包含图片头信息数据段、多个帧头信息数据段以及多个码流数据段。一动态格式的图片文件包含图片头信息、多个帧头信息和表示多帧图像的码流数据。其中,一帧图像对应的码流数据对应一个帧头信息,表示每一帧图像的码流数据包含由该帧图像的RGB数据生成的第一码流数据和由该帧图像的透明度数据生成的第二码流数据。将各个信息或数据写入对应的数据段中,例如,将图片头信息写入图片头信息数据段;将第1帧对应的帧头信息写入第1帧对应的帧头信息数据段;将第1帧对应的码流数据写入第1帧对应的码流数据段,以此类推,实现将多帧对应的帧头信息写入各个帧对应的帧头信息数据段中,以及将多帧对应的码流数据写入各个帧对应的码流数据段中。需要说明的是,由于码流数据段中的第一码流数据和第二码流数据是通过视频编码模式得到的,因此码流数据段可以采用视频帧数据段来描述,这样在每一帧图像对应的视频帧数据段中写入的信息为对该帧图像进行编码得到的第一码流数据和第二码流数据。
在本申请另外一些实施例中,图片文件的一帧图像中的一个码流数据对应一个帧头信息。具体的,针对静态格式的图片文件的情况,静态格式的图片文件包含一帧图像,即为第一图像,包含透明度数据的第一图像对应于两个码流数据,分别为第一码流数据和第二码流数据,因此,该静态格式的图片文件中第一码流数据对应一个帧头信息、第二码流数据对应另一个帧头信息。针对动态格式的图片文件的情况,动态格式的图片文件包含至少两帧图像,包含透明度数据的每一帧图像对应于两个码流数据,分别为第一码流数据和第二码流数据,并对每一帧图像的第一码流数据和第二码流数据各增加一个帧头信息。
请参见图7a,为本申请实施例提供的另一种静态格式的图片文件的封装示例图。为了区分第一码流数据对应的帧头信息和第二码流数据对应的帧头信息,在这里用图像帧头信息和透明通道帧头信息进行区分,其中,由RGB数据生成的第一码流数据与图像帧头信息对应,由透明度数据生成的第二码流数据与透明通道帧头信息对应。如图7a所示,该图片文件包含图片头信息数据段、第一码流数据对应的图像帧头信息数据段、第一码流数据段、第二码流数据对应的透明通道帧头信息数据段、第二码流数据段。一静态格式的图片文件包含图片头信息、两个帧头信息和表示一帧图像的第一码流数据和第二码流数据,其中,第一码流数据是由该帧图像的RGB数据生成,第二码流数据是由该帧图像的透明度数据生成的。将各个信息或数据写入对应的数据段中,例如,将图片头信息写入图片头信息数据段;将第一码流数据对应的图像帧头信息写入第一码流数据对应的图像帧头信息数据段;将第一码流数据写入第一码流数据段;将第二码流数据对应的透明通道帧头信息写入第二码流数据对应的透明通道帧头信息数据段;将第二码流数据写入第二码流数据段。在本申请一些实施例中,第一码流数据对应的图像帧头信息数据段和第一码流数据段可以设定为图像帧数据段,第二码流数据对应的透明 通道帧头信息数据段和第二码流数据段可以设定为透明通道帧数据段,本申请实施例对各个数据段的名称和各个数据段相结合后的数据段名称不做限定。
在本申请一些实施例中,对于图片文件的一帧图像中的一个码流数据对应一个帧头信息的情况,所述编码装置可以按照预设的顺序来排列第一码流数据对应的帧头信息数据段、第一码流数据段、第二码流数据对应的帧头信息数据段和第二码流数据段;例如,对于一帧图像的第一码流数据段、第二码流数据段和各个码流数据对应的帧头信息数据段,可以按照第一码流数据对应的帧头信息数据段、第一码流数据段、第二码流数据对应的帧头信息数据段、第二码流数据段进行排列,这样在解码装置解码的过程中,能够确定表示该帧图像的两个帧头信息和两个帧头指示的码流数据段中,哪一个能够获取到第一码流数据,哪一个能获取到第二码流数据。可以理解的是,这里的第一码流数据是指由RGB数据生成的码流数据,第二码流数据是指由透明度数据生成的码流数据。
请参见图7b,为本申请实施例提供的另一种动态格式的图片文件的封装示例图。为了区分第一码流数据对应的帧头信息和第二码流数据对应的帧头信息,在这里用图像帧头信息和透明通道帧头信息进行区分,其中,由RGB数据生成的第一码流数据与图像帧头信息对应,由透明度数据生成的第二码流数据与透明通道帧头信息对应。如图7b所示,该图片文件包含图片头信息数据段、多个帧头信息数据段以及多个码流数据段。一动态格式的图片文件包含图片头信息、多个帧头信息和表示多帧图像的码流数据。其中,一帧图像对应的第一码流数据和第二码流数据分别对应一个帧头信息,其中,第一码流数据是由该帧图像的RGB数据生成的,第二码流数据是由该帧图像的透明度数据生成的。将各个信息或数据写入对应的数据段中。例如,将图片头信息写入图片头信息数据段;将第1帧中第一码流数据对应的图像帧头信息写入第1帧中的第一码流数据对应的图像帧头信息数据段;将第1帧对应的第一码流数据写入第1帧中的第一码流数据段;将第1帧中第二码流数据对应的透 明通道帧头信息写入第1帧中的第二码流数据对应的透明通道帧头信息数据段;将第1帧对应的第二码流数据写入第1帧中的第二码流数据段,以此类推,实现将多帧中各个码流数据对应的帧头信息写入各个帧中相应码流数据对应的帧头信息数据段中,以及将多帧中的各个码流数据写入各个帧中相应码流数据对应的码流数据段中。在本申请一些实施例中,第一码流数据对应的图像帧头信息数据段和第一码流数据段可以设定为图像帧数据段,第二码流数据对应的透明通道帧头信息数据段和第二码流数据段可以设定为透明通道帧数据段,本申请实施例对各个数据段的名称和各个数据段相结合后的数据段名称不做限定。
进一步的,所述帧头信息包括所述帧头信息起始码和用于指示若所述图片文件为动态格式的图片文件的延迟时间信息。在本申请一些实施例中,所述帧头信息还包括所述帧头信息数据段长度和所述帧头信息所指示的码流数据段的码流数据段长度中的至少一项。进一步地,在本申请一些实施例中,所述帧头信息还包括区别于其他帧图像的特有信息,如编码区域信息、透明度信息、颜色表等等,本申请实施例对此不做限定。
对于将一帧图像编码后得到的第一码流数据和第二码流数据对应一个帧头信息的情况,所述帧头信息可以参考图8a所示的帧头信息的示例图,如图8a所示,请参见以下具体介绍。
帧头信息起始码:是用于指示帧头信息起始位置的字段,例如,采用1字节表示。
帧头信息数据段长度:表示帧头信息的长度,例如,采用1字节表示,该信息是可选信息。
码流数据段长度:表示所述帧头信息所指示的码流数据段的码流长度,其中,对于第一码流数据和第二码流数据对应于一个帧头信息的情况,则这里的码流长度为第一码流数据的长度和第二码流数据的长度的总和,该信息是可选信息。
延迟时间信息:只有当图片文件为动态格式的图片文件时才存在, 表示显示当前帧对应的图像与显示下一帧对应的图像的时间差,例如,采用1字节表示。
需要说明的是,以上仅为举例说明,本申请实施例对帧头信息包含的各个信息的名称、各个信息在帧头信息中的位置以及表示各个信息所占用的比特数不做限定。
对于第一码流数据和第二码流数据分别对应一个帧头信息的情况,帧头信息分为图像帧头信息和透明通道帧头信息,请一并参见图8b和图8c。
如图8b所示,为本申请实施例提供了一种图像帧头信息的示例图。所述图像帧头信息包括所述图像帧头信息起始码和用于指示若所述图片文件为动态格式的图片文件的延迟时间信息。在本申请一些实施例中,所述图像帧头信息还包括所述图像帧头信息数据段长度和所述图像帧头信息所指示的第一码流数据段的第一码流数据段长度中的至少一项。进一步,在本申请一些实施例中,所述图像帧头信息还包括区别于其他帧图像的特有信息,如编码区域信息、透明度信息、颜色表等等,本申请实施例对此不做限定。
图像帧头信息起始码:是用于指示图像帧头信息起始位置的字段,例如,采用1字节表示,如位串‘0x000001BA’。
图像帧头信息数据段长度:表示图像帧头信息的长度,例如,采用1字节表示,该信息是可选信息。
第一码流数据段长度:表示所述图像帧头信息所指示的第一码流数据段的码流长度,该信息是可选信息。
延迟时间信息:只有当图片文件为动态格式的图片文件时才存在,表示显示当前帧对应的图像与显示下一帧对应的图像的时间差,例如,采用1字节表示。
如图8c所示,为本申请实施例提供的一种透明通道帧头信息的示例图。所述透明通道帧头信息包括所述透明通道帧头信息起始码。在本申请一些实施例中,所述透明通道帧头信息还包括所述透明通道帧头信息 数据段长度、所述透明通道帧头信息所指示的第二码流数据段的第二码流数据段长度和用于指示若所述图片文件为动态格式的图片文件的延迟时间信息中的至少一项。进一步,在本申请一些实施例中,所述透明通道帧头信息还包括区别于其他帧图像的特有信息,如编码区域信息、透明度信息、颜色表等等,本申请实施例对此不做限定。
透明通道帧头信息起始码:是用于指示透明通道帧头信息起始位置的字段,例如,采用1字节表示,如位串‘0x000001BB’。
透明通道帧头信息数据段长度:表示透明通道帧头信息的长度,例如,采用1字节表示,该信息是可选信息。
第二码流数据段长度:表示所述透明通道帧头信息所指示的第二码流数据段的码流长度,该信息是可选信息。
延迟时间信息:只有当图片文件为动态格式的图片文件时才存在,表示显示当前帧对应的图像与显示下一帧对应的图像的时间差,例如,采用1字节表示。该信息是可选信息。透明通道帧头信息在不包含延迟时间信息的情况下,可以参考图像帧头信息中的延迟时间信息。
在本申请实施例中,图片文件、图像、第一码流数据、第二码流数据、图片头信息、帧头信息以及图片头信息包含的各个信息、帧头信息包含的各个信息等词可以以其他名称出现,例如,图片文件采用“图片”来描述,只要各个词的功能和本申请类似,则属于本申请权利要求及其等同技术的保护范围之内。
又一需要说明的是,本申请实施例中编码之前输入的RGBA数据可以是通过对各种格式的图片文件解码获得的,其中图片文件的格式可以为JPEG、BMP、PNG、APNG、GIF等格式中的任一种,本申请实施例对编码之前的图片文件的格式不做限定。
需要说明的是,本申请实施例中的每个起始码的形式在整个压缩图像数据中是唯一的,以起到唯一识别各个数据段的作用。本申请实施例中涉及到的图片文件用于表示一个完整的图片文件或图像文件,它可以包含一幅或多幅图像,图像是指一帧图画。本申请实施例中涉及到的视 频帧数据是图片文件中的每一帧图像通过视频编码后得到的码流数据,例如,对RGB数据编码之后得到的第一码流数据可以看做一个视频帧数据,对透明度数据编码之后得到的第二码流数据也可以看做一个视频帧数据。
在本申请实施例中,在第一图像为RGBA数据的情况下,编码装置获取图片文件中第一图像对应的RGBA数据,并通过分离RGBA数据得到所述第一图像的RGB数据和透明度数据,按照第一视频编码模式对所述第一图像的RGB数据进行编码,生成第一码流数据;按照第二视频编码模式对第一图像的透明度数据进行编码,生成第二码流数据;并生成包含第一图像的图片文件对应的图片头信息和帧头信息;最后将第一码流数据和第二码流数据写入码流数据段中、将图片头信息写入图片头信息数据段、将帧头信息写入帧头信息数据段。这样,通过采用视频编码模式编码能够提高图片文件的压缩率,减小图片文件的大小,因此可以提升图片加载速度,节省网络传输带宽以及存储成本;另外,通过对图片文件中的RGB数据和透明度数据分别进行编码,实现了在采用视频编码模式的同时保留了图片文件中的透明度数据,保证了图片文件的质量。
请参见图9,为本申请实施例提供的一种图片文件处理方法的流程示意图,该方法可由前述计算设备执行。如图9所示,假设该计算设备为一终端设备,本申请实施例的所述方法可以包括步骤401至步骤404。
步骤401,从图片文件的码流数据段中获取由所述图片文件中的第一图像生成的第一码流数据和第二码流数据。
具体的,在终端设备中运行的解码装置从图片文件的码流数据段中获取由所述图片文件中的第一图像生成的第一码流数据和第二码流数据。
步骤402,按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据。
具体的,在终端设备中运行的解码装置按照第一视频解码模式对第一码流数据进行解码。其中,所述第一码流数据和所述第二码流数据是所述解码装置通过对图片文件进行解析,从码流数据段中读取的由所述第一图像生成的数据,所述第一图像为所述图片文件中包含的图像。对于图片文件包含透明度数据的情况,所述解码装置获取表示第一图像的第一码流数据和第二码流数据。所述第一图像可以为静态格式的图片文件所包含的一帧图像;或者,所述第一图像可以为动态格式的图片文件所包含的多帧图像中的任一帧图像。
在本申请一些实施例中,针对所述图片文件包含RGB数据和透明度数据的情况,所述图片文件中存在用于指示码流数据段的信息,以及对于动态格式的图片文件而言,所述图片文件中存在用于指示不同帧图像对应的码流数据段的信息,以使所述解码装置能够获取到由第一图像的RGB数据生成的第一码流数据以及由第一图像的透明度数据生成的第二码流数据。
进一步的,所述解码装置对所述第一码流数据进行解码,以生成第一图像的RGB数据。
步骤403,按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据。
具体的,所述解码装置按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据。其中,所述第二码流数据也同步骤402中第一码流数据的读取方式相同,在此不再赘述。
针对步骤402和步骤403而言,所述第一视频解码模式或第二视频解码模式可以是根据生成第一码流数据或生成第二码流数据所采用的视频编码模式确定的,例如,以第一码流数据为例进行说明,若所述第一码流数据采用I帧编码,则所述第一视频解码模式为根据当前的码流数据就可以生成RGB数据;若所述第一码流数据采用P帧编码,则所述第一视频解码模式为根据前面已解码的数据,生成当前帧的RGB数据。第二视频解码模式可以参考第一视频解码模式的介绍,在此不再赘 述。
需要说明的是,步骤402和步骤403在执行过程中并无先后顺序之分。
步骤404,根据所述第一图像的所述RGB数据和所述透明度数据,生成所述第一图像对应的RGBA数据。
具体的,所述解码装置根据所述第一图像的所述RGB数据和所述透明度数据,生成所述第一图像对应的RGBA数据。其中,RGBA数据是代表Red、Green、Blue和Alpha的色彩空间。RGB数据和透明度数据能够合成为RGBA数据。这样能够将按照视频编码模式编码得到的码流数据,通过对应的视频解码模式生成相应的RGBA数据,实现了在采用视频编解码模式的同时保留了图片文件中的透明度数据,保证了图片文件的质量和展示效果。
举例来说,所述解码装置解码获得的第一图像的RGB数据和透明度数据,形式如下:
RGB RGB RGB RGB RGB RGB……RGB
AAAAAA……A
则所述解码装置将相对应的RGB数据和透明度数据进行合并,以得到第一图像的RGBA数据,其形式如下:
RGBA RGBA RGBA RGBA RGBA RGBA……RGBA
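上述合并过程可用如下Python函数作示意(假设RGB数据与透明度数据均为逐像素排列的列表,仅为说明合并逻辑):

```python
def merge_rgba(rgb, alpha):
    """将相对应的RGB数据和透明度数据逐像素合并为RGBA数据(示意实现)。"""
    out = []
    for i, a in enumerate(alpha):
        out.extend(rgb[3 * i:3 * i + 3])  # 该像素点的R、G、B分量
        out.append(a)                     # 该像素点的A分量(透明度)
    return out
```

合并后的列表即为按"RGBA RGBA……"形式排列的第一图像对应的RGBA数据。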
需要说明的是,本申请实施例中的图片文件是包含RGB数据和透明度数据的情况,因此能够通过解析图片文件读取到可以生成RGB数据的第一码流数据和生成透明度数据的第二码流数据,进而分别执行步骤402和步骤403。而对于图片文件仅包含RGB数据的情况,能够通过解析图片文件读取到可以生成RGB数据的第一码流数据,并执行步骤402,生成RGB数据即完成了对第一码流数据的解码。
在本申请实施例中,解码装置按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据;按照第二视频解码模式对第二码流数据进行解码,生成第一图像的透明度数据;根据第一图像的 RGB数据和透明度数据,生成第一图像对应的RGBA数据。通过对图片文件中的第一码流数据和第二码流数据分别进行解码,进而获得RGBA数据,实现了在采用视频编解码模式的同时保留了图片文件中的透明度数据,从而保证了图片文件的质量。
请参见图10,为本申请实施例提供的另一种图片文件处理方法的流程示意图,该方法可由前述计算设备执行。如图10所示,假设该计算设备为一终端设备,本申请实施例的所述方法可以包括步骤501至步骤507。本申请实施例是以动态格式的图片文件为例进行说明的,请参见以下具体介绍。
步骤501,获取动态格式的图片文件中由第k帧对应的第一图像生成的第一码流数据和第二码流数据。
具体的,运行在终端设备中的解码装置通过对所述动态格式的图片文件进行解析,从图片文件的码流数据段中获取由第k帧对应的第一图像生成的第一码流数据和第二码流数据。其中,对于图像文件包含透明度数据的情况,所述解码装置获取表示第一图像的第一码流数据和第二码流数据。该动态格式的图片文件中包含至少两帧图像,第k帧可以为所述至少两帧图像中的任意一帧。其中,k为大于0的正整数。
在本申请一些实施例中,针对动态格式的图片文件包含RGB数据和透明度数据的情况,所述图片文件中存在用于指示不同帧图像对应的码流数据段的信息,以使所述解码装置能够获取到由第一图像的RGB数据生成的第一码流数据以及由第一图像的透明度数据生成的第二码流数据。
在本申请一些实施例中,所述解码装置可以按照所述动态格式的图片文件中每一帧对应的码流数据的先后顺序进行解码,即可以先获取所述动态格式的图片文件的第一帧对应的码流数据进行解码。本申请实施例对所述解码装置获取所述动态格式的图片文件的表示各帧图像的码流数据的顺序不做限定。
在本申请一些实施例中,所述解码装置可以通过图片文件的图片头信息和帧头信息确定表示每一帧对应的图像的码流数据,可以参见下一个实施例中关于图片头信息和帧头信息的具体介绍。
步骤502,按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据。
具体的,所述解码装置按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据。在本申请一些实施例中,所述解码装置按照第一视频解码模式对所述第一码流数据进行解码,生成第一图像的第一YUV数据;将所述第一YUV数据转换为所述第一图像的RGB数据。
步骤503,按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据。
具体的,所述解码装置按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据。在本申请一些实施例中,所述解码装置按照第二视频解码模式对所述第二码流数据进行解码,生成所述第一图像的第二YUV数据;将所述第二YUV数据转换为所述第一图像的透明度数据。在本申请一些实施例中,所述解码装置将所述第二YUV数据中的Y分量设定为所述第一图像的所述透明度数据,且舍弃所述第二YUV数据中的U分量和V分量。
需要说明的是,步骤502和步骤503在执行过程中并无先后顺序之分。
步骤504,根据所述第一图像的所述RGB数据和所述透明度数据,生成所述第一图像对应的RGBA数据。
具体的,所述解码装置根据所述第一图像的所述RGB数据和所述透明度数据,生成所述第一图像对应的RGBA数据。其中,RGBA数据是代表Red、Green、Blue和Alpha的色彩空间。RGB数据和透明度数据能够合成为RGBA数据。这样能够将按照视频编码模式编码得到的码流数据,通过对应的视频解码模式生成相应的RGBA数据,实现了在采 用视频编解码模式的同时保留了图片文件中的透明度数据,保证了图片文件的质量和展示效果。
举例来说,所述解码装置解码获得的第一图像的RGB数据和透明度数据,形式如下:
RGB RGB RGB RGB RGB RGB……RGB
AAAAAA……A
则所述解码装置将相对应的RGB数据和透明度数据进行合并,以得到第一图像的RGBA数据,其形式如下:
RGBA RGBA RGBA RGBA RGBA RGBA……RGBA
步骤505,判断第k帧是否为所述动态格式的图片文件的最后一帧。
具体的,所述解码装置判断第k帧是否为所述动态格式的图片文件的最后一帧。在本申请一些实施例中,可以通过检测图片头信息中包含的帧数来确定是否完成对图片文件的解码。若第k帧为所述动态格式的图片文件的最后一帧,则表示完成对所述动态格式的图片文件的解码,执行步骤507;若第k帧不是所述动态格式的图片文件的最后一帧,则执行步骤506。
步骤506,若第k帧不是所述动态格式的图片文件的最后一帧,则更新k,并触发执行获取动态格式的图片文件中第k帧对应的第一图像的第一码流数据和第二码流数据的操作。
具体的,若所述解码装置判断第k帧不是所述动态格式的图片文件的最后一帧,则对下一帧对应的图像的码流数据进行解码,即采用(k+1)的数值更新k。在将k更新之后,触发执行获取动态格式的图片文件中第k帧对应的第一图像的第一码流数据和第二码流数据的操作。
可以理解的是,采用更新的k所获取的图像与k更新之前所获取的图像并非是同一帧对应的图像,为了便于说明,这里将k更新之前的第k帧对应的图像设为第一图像,将k更新之后的第k帧对应的图像设为第二图像,以便于区别。
在对第二图像执行步骤502至步骤504时,在本申请一些实施例中, 表示第二图像的码流数据为第三码流数据和第四码流数据;按照第三视频解码模式对所述第三码流数据进行解码,生成所述第二图像的RGB数据;按照第四视频解码模式对所述第四码流数据进行解码,生成所述第二图像的透明度数据,其中,第三码流数据是根据第二图像的RGB数据生成的,第四码流数据是根据第二图像的透明度数据生成的;根据所述第二图像的所述RGB数据和所述透明度数据,生成所述第二图像对应的RGBA数据。
针对步骤502和步骤503而言,上述涉及到的所述第一视频解码模式、第二视频解码模式、第三视频解码模式或第四视频解码模式是根据生成码流数据所采用的视频编码模式确定的。例如,以第一码流数据为例进行说明,若所述第一码流数据采用I帧编码,则所述第一视频解码模式为根据当前的码流数据就可以生成RGB数据;若所述第一码流数据采用P帧编码,则所述第一视频解码模式为根据前面已解码的数据,生成当前帧的RGB数据。对于其他视频解码模式可以参考第一视频解码模式的介绍,在此不再赘述。
进一步需要说明的是,所述动态格式的图片文件包含有多个码流数据段,在本申请一些实施例中,一帧图像对应一个码流数据段;或者,在本申请另一些实施例中,一个码流数据对应一个码流数据段。因此,读取所述第一码流数据和第二码流数据的码流数据段与读取所述第三码流数据和第四码流数据的码流数据段不同。
步骤507,若第k帧是所述动态格式的图片文件的最后一帧,则完成对所述动态格式的图片文件的解码。
具体的,若所述解码装置判断第k帧是所述动态格式的图片文件的最后一帧,则表示完成对该动态格式的图片文件解码。
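上述步骤501至步骤507的逐帧解码流程可用如下Python框架作示意。其中decode_rgb、decode_alpha代表假设的视频解码接口,frames_stream为各帧(第一码流数据, 第二码流数据)的列表,仅用于说明流程,并非实际解码器实现:

```python
def decode_animated(frames_stream, decode_rgb, decode_alpha):
    """对动态格式图片文件逐帧解码并合并RGBA数据的示意框架。"""
    images = []
    for k, (stream_rgb, stream_a) in enumerate(frames_stream, start=1):  # 第k帧
        rgb = decode_rgb(stream_rgb)      # 步骤502:按第一视频解码模式解码
        alpha = decode_alpha(stream_a)    # 步骤503:按第二视频解码模式解码
        rgba = []                         # 步骤504:合并为RGBA数据
        for i, a in enumerate(alpha):
            rgba.extend(rgb[3 * i:3 * i + 3])
            rgba.append(a)
        images.append(rgba)
    # 循环遍历到最后一帧即完成解码,对应步骤505至步骤507的判断逻辑
    return images
```

例如以恒等函数模拟解码接口时,可直接验证各帧的RGB数据与透明度数据被逐帧合并输出。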
在本申请一些实施例中,所述解码装置可以解析图片文件,得到所述动态格式的图片文件的图片头信息和帧头信息,这样可以通过图片头信息确定该图片文件是否包含透明度数据,进而能够确定解码过程中是只获取由RGB数据生成的第一码流数据,还是获取由RGB数据生成的 第一码流数据和由透明度数据生成的第二码流数据。
需要说明的,本申请实施例的动态格式的图片文件中每一帧对应的图像为包含RGB数据和透明度数据的RGBA数据,而对于动态格式的图片文件中每一帧对应的图像仅包含RGB数据的情况,表示每一帧图像的码流数据仅仅是第一码流数据,因此所述解码装置可以对表示每一帧图像的第一码流数据执行步骤502,以生成RGB数据。这样依旧可以通过视频解码模式对仅包含RGB数据的码流数据进行解码。
在本申请实施例中,在确定动态格式的图片文件中包含RGB数据和透明度数据的情况下,解码装置按照第一视频解码模式对表示每一帧图像中的第一码流数据进行解码,生成第一图像的RGB数据;按照第二视频解码模式对表示每一帧图像中的第二码流数据进行解码,生成第一图像的透明度数据;根据第一图像的RGB数据和透明度数据,生成第一图像对应的RGBA数据。通过对图片文件中的第一码流数据和第二码流数据分别进行解码,进而获得RGBA数据,实现了在采用视频编解码模式的同时保留了图片文件中的透明度数据,保证了图片文件的质量。
请参见图11,为本申请实施例提供的另一种图片文件处理方法的流程示意图,该方法可由前述计算设备执行。如图11所示,假设该计算设备为一终端设备,本申请实施例的所述方法可以包括步骤601至步骤606。
步骤601,解析图片文件,得到所述图片文件的图片头信息和帧头信息。
具体的,运行在终端设备中的解码装置解析图片文件,以获得所述图片文件的图片头信息和帧头信息。其中,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,通过确定是否包含透明度数据可以确定如何获取码流数据以及获取到的码流数据是否包含由透明度数据生成的第二码流数据。所述帧头信息用于指示所述图片文件的码流数据段,通过帧头信息可以确定能够获取到码流数据的码流数 据段,进而实现对码流数据的解码。举例来说,帧头信息包含帧头信息起始码,通过识别帧头信息起始码能够确定码流数据段。
在本申请一些实施例中,所述解码装置解析图片文件得到所述图片文件的图片头信息具体可以是:从图片文件的图片头信息数据段中读取所述图片文件的图片头信息。
在本申请一些实施例中,所述解码装置解析图片文件得到所述图片文件的帧头信息具体可以是:从图片文件的帧头信息数据段中读取所述图片文件的帧头信息。
需要说明的是,本申请实施例的图片头信息和帧头信息可以参考图5a、图5b、图5c、图6a、图6b、图7a、图7b、图8a、图8b和图8c的举例来说,在此不再赘述。
步骤602,读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据。
具体的,若通过所述图像特征信息确定所述图片文件包含透明度数据,则所述解码装置读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据。所述码流数据包括第一码流数据和第二码流数据。
在本申请一些实施例中,图片文件的一帧图像对应一个帧头信息,即该帧头信息可以用于指示包含第一码流数据和第二码流数据的码流数据段。具体的,针对图片文件为静态格式的图片文件的情况,静态格式的图片文件包含一帧图像,即为第一图像,因此,该静态格式的图片文件中包含一个帧头信息。针对图片文件为动态格式的图片文件的情况,动态格式的图片文件一般包含至少两帧图像,对于其中的每一帧图像均有一个帧头信息。若确定所述图片文件包含透明度数据,则所述解码装置根据所述帧头信息指示的码流数据段中读取所述第一码流数据和第二码流数据。
在本申请另外一些实施例中,图片文件的一帧图像中的一个码流数 据对应一个帧头信息,即一个帧头信息中指示的码流数据段中包含一个码流数据。具体的,针对静态格式的图片文件的情况,静态格式的图片文件包含一帧图像,即为第一图像,包含透明度数据的第一图像对应于两个码流数据,分别为第一码流数据和第二码流数据,因此,该静态格式的图片文件中第一码流数据对应一个帧头信息、第二码流数据对应另一个帧头信息。针对动态格式的图片文件的情况,动态格式的图片文件包含至少两帧图像,包含透明度数据的每一帧图像对应于两个码流数据,分别为第一码流数据和第二码流数据,并对每一帧图像的第一码流数据和第二码流数据各增加一个帧头信息。因此,若确定所述图片文件包含透明度数据,则所述解码装置根据两个帧头信息分别指示的两个码流数据段,分别获取第一码流数据和第二码流数据。
需要说明的是,对于图片文件的一帧图像中的一个码流数据对应一个帧头信息的情况,编码装置可以按照预设的顺序来排列第一码流数据对应的帧头信息数据段、第一码流数据段、第二码流数据对应的帧头信息数据段、和第二码流数据段,并且解码装置可以确定编码装置的排列顺序。举例来说,对于一帧图像的第一码流数据段、第二码流数据段和各个码流数据对应的帧头信息数据段,可以按照第一码流数据对应的帧头信息数据段、第一码流数据段、第二码流数据对应的帧头信息数据段、第二码流数据段进行排列,这样在解码装置解码的过程中,能够确定表示该帧图像的两个帧头信息和两个帧头指示的码流数据段中,哪一个能够获取到第一码流数据,哪一个能获取到第二码流数据。可以理解的是,这里的第一码流数据是指由RGB数据生成的码流数据,第二码流数据是指由透明度数据生成的码流数据。
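定位各数据段依赖于识别相应的起始码,以下为一个示意性的Python实现,假设各起始码字节串在整个压缩图像数据中唯一(起始码取值仅沿用正文举例,并非规范定义):

```python
def find_segments(data, start_codes):
    """通过识别起始码定位各数据段起始位置(示意实现)。

    data: 压缩图像数据的字节串;start_codes: {数据段名称: 起始码字节串}。
    返回按偏移排序的 (偏移, 数据段名称) 列表。
    """
    hits = []
    for name, code in start_codes.items():
        pos = data.find(code)
        while pos != -1:
            hits.append((pos, name))
            pos = data.find(code, pos + 1)
    return sorted(hits)
```

按预设顺序排列的图像帧头信息数据段和透明通道帧头信息数据段,即可通过各自的起始码在字节流中依次定位,从而区分第一码流数据与第二码流数据所在的数据段。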
步骤603,按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据。
步骤604,按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据。
步骤605,根据所述第一图像的所述RGB数据和所述透明度数据, 生成所述第一图像对应的RGBA数据。
其中,步骤603至步骤605可以参见图9和图10实施例中对应步骤的具体描述,在此不再赘述。
在本申请实施例中,在图片文件包含RGB数据和透明度数据的情况下,解码装置解析图片文件,得到图片文件的图片头信息和帧头信息,并读取图片文件中帧头信息指示的码流数据段中的码流数据;按照第一视频解码模式对表示每一帧图像中的第一码流数据进行解码,生成第一图像的RGB数据;按照第二视频解码模式对表示每一帧图像中的第二码流数据进行解码,生成第一图像的透明度数据;根据第一图像的RGB数据和透明度数据,生成第一图像对应的RGBA数据。通过对图片文件中的第一码流数据和第二码流数据分别进行解码,进而获得RGBA数据,实现了在采用视频编解码模式的同时保留了图片文件中的透明度数据,保证了图片文件的质量。
请参见图12,为本申请实施例提供的另一种图片文件处理方法的流程示意图,该方法可由前述计算设备执行。如图12所示,假设该计算设备为一终端设备,本申请实施例的所述方法可以包括步骤701至步骤705。
步骤701,生成图片文件对应的图片头信息和帧头信息。
具体的,运行在终端设备中的图片文件处理装置生成所述图片文件对应的图片头信息和帧头信息。其中,该图片文件可以为静态格式的图片文件,即仅包含该第一图像;或者,该图片文件为动态格式的图片文件,即包含所述第一图像以及其他图像。不论该图片文件是静态格式的图片文件还是动态格式的图片文件,所述图片文件处理装置都需要生成该图片文件对应的图片头信息。其中,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,以使解码装置通过所述图像特征信息确定所述图片文件是否包含透明度数据,从而确定如何获取码流数据以及获取到的码流数据是否包含由透明度数据生成的第二码流数据。
进一步的,所述帧头信息用于指示所述图片文件的码流数据段,以使解码装置通过帧头信息确定能够获取到码流数据的码流数据段,进而实现对码流数据的解码。举例来说,帧头信息包含帧头信息起始码,通过识别帧头信息起始码能够确定码流数据段。
步骤702,将所述图片头信息写入所述图片文件的图片头信息数据段中。
具体的,所述图片文件处理装置将所述图片头信息写入所述图片文件的图片头信息数据段中。
步骤703,将所述帧头信息写入所述图片文件的帧头信息数据段中。
具体的,所述图片文件处理装置将所述帧头信息写入所述图片文件的帧头信息数据段中。
步骤704,若根据图片头信息包括的图像特征信息确定所述图片文件中包含透明度数据,则对所述第一图像对应的RGBA数据中包含的RGB数据按照第一视频编码模式进行编码生成第一码流数据,以及对所述第一图像对应的RGBA数据中包含的透明度数据按照第二视频编码模式进行编码生成第二码流数据。
具体的,若确定所述图片文件中的第一图像包含透明度数据,则所述图片文件处理装置按照第一视频编码模式对所述第一图像对应的RGBA数据中包含的RGB数据进行编码生成第一码流数据,以及按照第二视频编码模式对所述第一图像对应的RGBA数据中包含的透明度数据进行编码生成第二码流数据。
在本申请一些实施例中,在所述图片文件处理装置获取到所述图片文件中的第一图像对应的RGBA数据之后,分离所述RGBA数据,以 得到所述第一图像的RGB数据和透明度数据,所述RGB数据为所述RGBA数据包含的颜色数据,所述透明度数据为所述RGBA数据包含的透明度数据。进一步实现对RGB数据和透明度数据的分别编码,具体的编码过程可以参见图1至图4d所示实施例中具体介绍,在此不再赘述。
步骤705,将所述第一码流数据和所述第二码流数据写入所述第一图像对应的帧头信息所指示的码流数据段中。
具体的,所述图片文件处理装置将所述第一码流数据和所述第二码流数据写入所述第一图像对应的帧头信息所指示的码流数据段中。
需要说明的是,本申请实施例的图片头信息和帧头信息可以参考图5a、图5b、图5c、图6a、图6b、图7a、图7b、图8a、图8b和图8c的举例来说,在此不再赘述。
又一需要说明的是,本申请实施例中编码之前输入的RGBA数据可以是通过对各种格式的图片文件解码获得的,其中图片文件的格式可以为JPEG、BMP、PNG、APNG、GIF等格式中的任一种,本申请实施例对编码之前的图片文件的格式不做限定。
在本申请实施例中,图片文件处理装置生成图片文件对应的图片头信息和帧头信息,通过图片头信息包含的指示图片文件是否存在透明度数据的图像特征信息,能够让解码装置确定如何获取码流数据以及获取到的码流数据是否包含由透明度数据生成的第二码流数据;通过帧头信息指示的图片文件的码流数据段,能够让解码装置获取到码流数据段中的码流数据,进而实现对码流数据的解码。
请参见图13,为本申请实施例提供的另一种图片文件处理方法的流程示意图,该方法可由前述的计算设备执行。如图13所示,假设该计算设备为一终端设备,本申请实施例的所述方法可以包括步骤801至步骤803。
步骤801,解析图片文件,得到所述图片文件的图片头信息和帧头 信息。
具体的,在终端设备中运行的图片文件处理装置解析图片文件,以获得所述图片文件的图片头信息和帧头信息。其中,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,通过确定所述图片文件是否包含透明度数据可以确定如何获取码流数据以及获取到的码流数据是否包含由透明度数据生成的第二码流数据。所述帧头信息用于指示所述图片文件的码流数据段,通过帧头信息可以确定能够获取到码流数据的码流数据段,进而实现对码流数据的解码。举例来说,帧头信息包含帧头信息起始码,通过识别帧头信息起始码能够确定码流数据段。
在本申请一些实施例中,所述图片文件处理装置解析图片文件得到所述图片文件的图片头信息具体可以是:从图片文件的图片头信息数据段中读取所述图片文件的图片头信息。
在本申请一些实施例中,所述图片文件处理装置解析图片文件得到所述图片文件的帧头信息具体可以是:从图片文件的帧头信息数据段中读取所述图片文件的帧头信息。
需要说明的是,本申请实施例的图片头信息和帧头信息可以参考图5a、图5b、图5c、图6a、图6b、图7a、图7b、图8a、图8b和图8c的举例来说,在此不再赘述。
步骤802,若通过所述图像特征信息确定所述图片文件包含透明度数据,则读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据,所述码流数据包括第一码流数据和第二码流数据。
具体的,若通过所述图像特征信息确定所述图片文件包含透明度数据,则所述图片文件处理装置读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据。所述码流数据包括第一码流数据和第二码流数据。
在本申请一些实施例中,图片文件的一帧图像对应一个帧头信息, 即该帧头信息可以用于指示包含第一码流数据和第二码流数据的码流数据段。具体的,针对图片文件为静态格式的图片文件的情况,静态格式的图片文件包含一帧图像,即为第一图像,因此,该静态格式的图片文件中包含一个帧头信息。针对图片文件为动态格式的图片文件的情况,动态格式的图片文件一般包含至少两帧图像,对于其中的每一帧图像均增加一个帧头信息。若确定所述图片文件包含透明度数据,则所述图片文件处理装置根据所述帧头信息指示的码流数据段中读取所述第一码流数据和第二码流数据。
在本申请另外一些实施例中,图片文件的一帧图像中的一个码流数据对应一个帧头信息,即一个帧头信息中指示的码流数据段中包含一个码流数据。具体的,针对静态格式的图片文件的情况,静态格式的图片文件包含一帧图像,即为第一图像,包含透明度数据的第一图像对应于两个码流数据,分别为第一码流数据和第二码流数据,因此,该静态格式的图片文件中第一码流数据对应一个帧头信息、第二码流数据对应另一个帧头信息。针对动态格式的图片文件的情况,动态格式的图片文件包含至少两帧图像,包含透明度数据的每一帧图像对应于两个码流数据,分别为第一码流数据和第二码流数据,并对每一帧图像的第一码流数据和第二码流数据各增加一个帧头信息。因此,若确定所述图片文件包含透明度数据,则所述图片文件处理装置根据两个帧头信息分别指示的两个码流数据段,分别获取第一码流数据和第二码流数据。
需要说明的是,对于图片文件的一帧图像中的一个码流数据对应一个帧头信息的情况,编码装置可以按照预设的顺序来排列第一码流数据对应的帧头信息数据段、第一码流数据段、第二码流数据对应的帧头信息数据段、和第二码流数据段,并且图片文件处理装置可以确定编码装置的排列顺序。举例来说,对于一帧图像的第一码流数据段、第二码流数据段和各个码流数据对应的帧头信息数据段,可以按照第一码流数据对应的帧头信息数据段、第一码流数据段、第二码流数据对应的帧头信息数据段、第二码流数据段进行排列,这样在图片文件处理装置解码的 过程中,能够确定表示该帧图像的两个帧头信息和两个帧头指示的码流数据段中,哪一个能够获取到第一码流数据,哪一个能获取到第二码流数据。可以理解的是,这里的第一码流数据是指由RGB数据生成的码流数据,第二码流数据是指由透明度数据生成的码流数据。
步骤803,对所述第一码流数据和所述第二码流数据分别进行解码。
具体的,在所述图片文件处理装置从码流数据段中获取到第一码流数据和第二码流数据之后,所述图片文件处理装置对第一码流数据和第二码流数据分别进行解码。
需要说明的是,所述图片文件处理装置可以参照图9至图11所示实施例中解码装置的执行过程实现对第一码流数据和第二码流数据的解码,在此不再赘述。
在本申请实施例中,图片文件处理装置对图片文件进行解析以得到图片头信息和帧头信息,通过图片头信息包含的指示图片文件是否存在透明度数据的图像特征信息,能够确定如何获取码流数据以及获取到的码流数据是否包含由透明度数据生成的第二码流数据;通过帧头信息指示的图片文件的码流数据段,获取到码流数据段中的码流数据,进而实现对码流数据的解码。
请参见图14a,为本申请实施例提供的一种编码装置的结构示意图。如图14a所示,本申请实施例的所述编码装置1可以包括:数据获取模块11、第一编码模块12、第二编码模块13和数据写入模块14。
数据获取模块11,用于获取图片文件中第一图像对应的RGBA数据,并分离所述RGBA数据,以得到所述第一图像的RGB数据和透明度数据,其中,所述RGB数据为所述RGBA数据包含的颜色数据,所述透明度数据为所述RGBA数据包含的透明度数据;
第一编码模块12,用于按照第一视频编码模式对所述第一图像的RGB数据进行编码,生成第一码流数据;
第二编码模块13,用于按照第二视频编码模式对所述第一图像的透明度数据进行编码,生成第二码流数据;
数据写入模块14,用于将所述第一码流数据和所述第二码流数据写入所述图片文件的码流数据段中,所述第一图像为所述图片文件中包含的图像。
在本申请一些实施例中,如图14b所示,所述第一编码模块12包括第一数据转换单元121和第一码流生成单元122,其中:
第一数据转换单元121,用于将所述第一图像的RGB数据转换为第一YUV数据;
第一码流生成单元122,用于按照第一视频编码模式对所述第一YUV数据进行编码,生成第一码流数据。
在本申请一些实施例中,如图14c所示,所述第二编码模块13包括第二数据转换单元131和第二码流生成单元132,其中:
第二数据转换单元131,用于将所述第一图像的透明度数据转换为第二YUV数据;
第二码流生成单元132,用于按照第二视频编码模式对所述第二YUV数据进行编码,生成第二码流数据。
在本申请一些实施例中,所述第二数据转换单元131用于将所述第一图像的透明度数据设定为第二YUV数据中的Y分量,且不设定所述第二YUV数据中的U分量和V分量。或者,所述第二数据转换单元131用于将所述第一图像的透明度数据设定为第二YUV数据中的Y分量,且将所述第二YUV数据中的U分量和V分量设定为预设数据。
在本申请一些实施例中,所述数据获取模块11,用于若所述图片文件为动态格式的图片文件且所述第一图像为所述图片文件中的第k帧对应的图像,则判断所述第k帧是否为所述图片文件中的最后一帧,其中,k为大于0的正整数;若所述第k帧不是所述图片文件中的最后一帧,获取所述图片文件中的第(k+1)帧对应的第二图像所对应的RGBA数据,并分离所述第二图像所对应的RGBA数据,以得到所述第二图像的RGB数据和透明度数据;
所述第一编码模块12,还用于按照第三视频编码模式对所述第二图像的RGB数据进行编码,生成第三码流数据;
所述第二编码模块13,还用于按照第四视频编码模式对所述第二图像的透明度数据进行编码,生成第四码流数据;
所述数据写入模块14,还用于将所述第三码流数据和所述第四码流数据写入所述图片文件的码流数据段中。
在本申请一些实施例中,如图14d所示,所述编码装置1还包括:
信息生成模块15,用于生成所述图片文件对应的图片头信息和帧头信息,其中,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在本申请一些实施例中,所述数据写入模块14,还用于将所述信息生成模块15生成的所述图片头信息写入所述图片文件的图片头信息数据段中。
在本申请一些实施例中,所述数据写入模块14,还用于将所述信息生成模块15生成的所述帧头信息写入所述图片文件的帧头信息数据段中。
需要说明的是,本申请实施例所描述的编码装置1所执行的模块、单元及带来的有益效果可根据上述图1c至图8c所示方法实施例中的方法具体实现,此处不再赘述。
请参见图15,为本申请实施例提供的另一种编码装置的结构示意图。如图15所示,所述编码装置1000可以包括:至少一个处理器1001,例如CPU,至少一个网络接口1004,存储器1005,至少一个通信总线1002。网络接口1004可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器1005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。在本申请一些实施例中,存储器1005还可以是至少一个位于远离前述处理器1001的存储 装置。其中,通信总线1002用于实现这些组件之间的连接通信。在本申请一些实施例中,所述编码装置1000包括用户接口1003,其中,所述用户接口1003可以包括显示屏(Display)10031、键盘(Keyboard)10032。如图15所示,作为一种计算机可读存储介质的存储器1005中可以包括操作系统10051、网络通信模块10052、用户接口模块10053以及机器可读指令10054,所述机器可读指令10054中包括编码应用程序10055。
在图15所示的编码装置1000中,处理器1001可以用于调用存储器1005中存储的编码应用程序10055,并具体执行以下操作:
获取图片文件中第一图像对应的RGBA数据,并分离所述RGBA数据,以得到所述第一图像的RGB数据和透明度数据,所述RGB数据为所述RGBA数据包含的颜色数据,所述透明度数据为所述RGBA数据包含的透明度数据;
按照第一视频编码模式对所述第一图像的RGB数据进行编码,生成第一码流数据;
按照第二视频编码模式对所述第一图像的透明度数据进行编码,生成第二码流数据;
将所述第一码流数据和所述第二码流数据写入所述图片文件的码流数据段中。
在一个实施例中,所述处理器1001在执行按照第一视频编码模式对所述第一图像的RGB数据进行编码,生成第一码流数据时,具体执行:
将所述第一图像的RGB数据转换为第一YUV数据;按照第一视频编码模式对所述第一YUV数据进行编码,生成第一码流数据。
在一个实施例中,所述处理器1001在执行按照第二视频编码模式对所述第一图像的透明度数据进行编码,生成第二码流数据时,具体执行:
将所述第一图像的透明度数据转换为第二YUV数据;按照第二视频编码模式对所述第二YUV数据进行编码,生成第二码流数据。
在一个实施例中,所述处理器1001在执行将所述第一图像的透明度数据转换为第二YUV数据时,具体执行:
将所述第一图像的透明度数据设定为第二YUV数据中的Y分量,且不设定所述第二YUV数据中的U分量和V分量;或者,
将所述第一图像的透明度数据设定为第二YUV数据中的Y分量,且将所述第二YUV数据中的U分量和V分量设定为预设数据。
在一个实施例中,所述处理器1001还执行以下步骤:
若所述图片文件为动态格式的图片文件且所述第一图像为所述图片文件中的第k帧对应的图像,则判断所述第k帧是否为所述图片文件中的最后一帧,其中,k为大于0的正整数;若所述第k帧不是所述图片文件中的最后一帧,获取所述图片文件中的第(k+1)帧对应的第二图像所对应的RGBA数据,并分离所述第二图像对应的RGBA数据,以得到所述第二图像的RGB数据和透明度数据;
按照第三视频编码模式对所述第二图像的RGB数据进行编码,生成第三码流数据;
按照第四视频编码模式对所述第二图像的透明度数据进行编码,生成第四码流数据;
将所述第三码流数据和所述第四码流数据写入所述图片文件的码流数据段中。
在一个实施例中,所述处理器1001还执行以下步骤:
生成所述图片文件对应的图片头信息和帧头信息,其中,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在一个实施例中,所述处理器1001还执行以下步骤:
将所述图片头信息写入所述图片文件的图片头信息数据段中。
在一个实施例中,所述处理器1001还执行以下步骤:
将所述帧头信息写入所述图片文件的帧头信息数据段中。
需要说明的是,本申请实施例所描述的处理器1001所执行的步骤及带来的有益效果可根据上述图1c至图8c所示方法实施例中的方法具体实现,此处不再赘述。
请参见图16a,为本申请实施例提供的一种解码装置的结构示意图。如图16a所示,本申请实施例的所述解码装置2可以包括:第一数据获取模块26、第一解码模块21、第二解码模块22和数据生成模块23。在本申请实施例中的所述第一码流数据和所述第二码流数据是从图片文件的码流数据段中读取的由所述第一图像生成的数据。
第一数据获取模块26,用于从图片文件的码流数据段中获取由所述图片文件中的第一图像生成的第一码流数据和第二码流数据;
第一解码模块21,用于按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据;
第二解码模块22,用于按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据;
数据生成模块23,用于根据所述第一图像的所述RGB数据和所述透明度数据,生成所述第一图像对应的RGBA数据。
在本申请一些实施例中,如图16b所示,所述第一解码模块21,包括第一数据生成单元211和第一数据转换单元212,其中:
第一数据生成单元211,用于按照第一视频解码模式对所述第一码流数据进行解码,生成第一图像的第一YUV数据;
第一数据转换单元212,用于将所述第一YUV数据转换为所述第一图像的RGB数据。
在本申请一些实施例中,如图16c所示,所述第二解码模块22,包括第二数据生成单元221和第二数据转换单元222,其中:
第二数据生成单元221,用于按照第二视频解码模式对所述第二码流数据进行解码,生成所述第一图像的第二YUV数据;
第二数据转换单元222,用于将所述第二YUV数据转换为所述第一图像的透明度数据。
在本申请一些实施例中,所述第二数据转换单元222具体用于将所述第二YUV数据中的Y分量设定为所述第一图像的所述透明度数据,且舍弃所述第二YUV数据中的U分量和V分量。
在本申请一些实施例中,如图16d所示,所述解码装置2还包括:
第二数据获取模块24,用于若所述图片文件为动态格式的图片文件且所述第一图像为所述动态格式的图片文件中的第k帧对应的图像,则判断所述第k帧是否为所述图片文件中的最后一帧,其中,k为大于0的正整数;若所述第k帧不是所述图片文件中的最后一帧,从所述图片文件的码流数据段中获取由所述图片文件中第(k+1)帧对应的第二图像生成的第三码流数据和第四码流数据;
所述第一解码模块21,还用于按照第三视频解码模式对所述第三码流数据进行解码,生成所述第二图像的RGB数据;
所述第二解码模块22,还用于按照第四视频解码模式对所述第四码流数据进行解码,生成所述第二图像的透明度数据;
所述数据生成模块23,还用于根据所述第二图像的所述RGB数据和所述透明度数据,生成所述第二图像对应的RGBA数据。
在本申请一些实施例中,如图16e所示,所述解码装置2还包括文件解析模块25:
所述文件解析模块25,用于解析图片文件,得到所述图片文件的图片头信息和帧头信息,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在本申请一些实施例中,所述文件解析模块25具体用于从图片文件的图片头信息数据段中读取所述图片文件的图片头信息。
在本申请一些实施例中,所述文件解析模块25具体用于从图片文件的帧头信息数据段中读取所述图片文件的帧头信息。
在本申请一些实施例中,
所述第一数据获取模块26,用于若通过所述图像特征信息确定所述图片文件包含透明度数据,则读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据,所述码流数据包括第一码流数据和第二码流数据。
需要说明的是,本申请实施例所描述的解码装置2所执行的模块、单元及带来的有益效果可根据上述图9至图11所示方法实施例中的方法具体实现,此处不再赘述。
请参见图17,为本申请实施例提供的另一种解码装置的结构示意图。如图17所示,所述解码装置2000可以包括:至少一个处理器2001,例如CPU,至少一个网络接口2004,存储器2005,至少一个通信总线2002。网络接口2004可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器2005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器2005还可以是至少一个位于远离前述处理器2001的存储装置。其中,通信总线2002用于实现这些组件之间的连接通信。在本申请一些实施例中,所述解码装置2000包括用户接口2003,其中,所述用户接口2003可以包括显示屏(Display)20031、键盘(Keyboard)20032。如图17所示,作为一种计算机可读存储介质的存储器2005中可以包括操作系统20051、网络通信模块20052、用户接口模块20053以及机器可读指令20054,所述机器可读指令20054包括解码应用程序20055。
在图17所示的解码装置2000中,处理器2001可以用于调用存储器2005中存储的解码应用程序20055,并具体执行以下操作:
从图片文件的码流数据段中获取由所述图片文件中的第一图像生成的第一码流数据和第二码流数据;
按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据;
按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据;
根据所述第一图像的所述RGB数据和所述透明度数据,生成所述第一图像对应的RGBA数据;
所述第一码流数据和所述第二码流数据是从图片文件的码流数据段中读取的由所述第一图像生成的数据。
在一个实施例中,所述处理器2001在执行按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据时,具体执行:
按照第一视频解码模式对所述第一码流数据进行解码,生成第一图像的第一YUV数据;将所述第一YUV数据转换为所述第一图像的RGB数据。
在一个实施例中,所述处理器2001在执行按照第二视频解码模式对第二码流数据进行解码,生成所述第一图像的透明度数据时,具体执行:
按照第二视频解码模式对所述第二码流数据进行解码,生成所述第一图像的第二YUV数据;将所述第二YUV数据转换为所述第一图像的透明度数据。
在一个实施例中,所述处理器2001在执行将所述第二YUV数据转换为所述第一图像的透明度数据时,具体执行:
将所述第二YUV数据中的Y分量设定为所述第一图像的所述透明度数据,且舍弃所述第二YUV数据中的U分量和V分量。
在一个实施例中,所述处理器2001还执行以下步骤:
若所述图片文件为动态格式的图片文件且所述第一图像为所述动态格式的图片文件中的第k帧对应的图像,则判断所述第k帧是否为所述图片文件中的最后一帧,其中,k为大于0的正整数;若所述第k帧不是所述图片文件中的最后一帧,从所述图片文件的码流数据段中获取由所述图片文件中第(k+1)帧对应的第二图像生成的第三码流数据和第四码流数据;
按照第三视频解码模式对所述第三码流数据进行解码,生成所述第二图像的RGB数据;
按照第四视频解码模式对所述第四码流数据进行解码,生成所述第二图像的透明度数据;
根据所述第二图像的所述RGB数据和所述透明度数据,生成所述第二图像对应的RGBA数据。
在一个实施例中,所述处理器2001在执行按照第一视频解码模式对第一码流数据进行解码,生成第一图像的RGB数据之前,还执行以下步骤:
解析图片文件,得到所述图片文件的图片头信息和帧头信息,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在一个实施例中,所述处理器2001在执行解析图片文件,得到所述图片文件的图片头信息时,具体执行:
从图片文件的图片头信息数据段中读取所述图片文件的图片头信息。
在一个实施例中,所述处理器2001在执行解析图片文件,得到所述图片文件的帧头信息时,具体执行:
从图片文件的帧头信息数据段中读取所述图片文件的帧头信息。
在一个实施例中,所述处理器2001还执行以下步骤:若通过所述图像特征信息确定所述图片文件包含透明度数据,则读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据,所述码流数据包括第一码流数据和第二码流数据。
需要说明的是,本申请实施例所描述的处理器2001所执行的步骤及带来的有益效果可根据上述图9至图11所示方法实施例中的方法具体实现,此处不再赘述。
请参见图18,为本申请实施例提供的一种图片文件处理装置的结构示意图。如图18所示,本申请实施例的所述图片文件处理装置3可以 包括:信息生成模块31。在本申请一些实施例中,所述图片文件处理装置3还可以包括第一信息写入模块32、第二信息写入模块33、数据编码模块34和数据写入模块35中的至少一个。
信息生成模块31,用于生成图片文件对应的图片头信息和帧头信息,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在本申请一些实施例中,所述图片文件处理装置3还包括:
第一信息写入模块32,用于将所述图片头信息写入所述图片文件的图片头信息数据段中。
所述图片文件处理装置3还包括第二信息写入模块33:
所述第二信息写入模块33,用于将所述帧头信息写入所述图片文件的帧头信息数据段中。
所述图片文件处理装置3还包括数据编码模块34和数据写入模块35:
所述数据编码模块34,用于若根据所述图像特征信息确定所述图片文件包含透明度数据,则对所述图片文件中包含的第一图像对应的RGBA数据中包含的RGB数据进行编码生成第一码流数据,以及对其中包含的透明度数据进行编码生成第二码流数据;
所述数据写入模块35,用于将所述第一码流数据和所述第二码流数据写入所述第一图像对应的帧头信息所指示的码流数据段中。
需要说明的是,本申请实施例所描述的图片文件处理装置3所执行的模块及带来的有益效果可根据上述图12所示方法实施例中的方法具体实现,此处不再赘述。
请参见图19,为本申请实施例提供的另一种图片文件处理装置的结构示意图。如图19所示,所述图片文件处理装置3000可以包括:至少一个处理器3001,例如CPU,至少一个网络接口3004,存储器3005,至少一个通信总线3002。网络接口3004可以包括标准的有线接口、无 线接口(如WI-FI接口)。存储器3005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器3005还可以是至少一个位于远离前述处理器3001的存储装置。其中,通信总线3002用于实现这些组件之间的连接通信。
在本申请一些实施例中,所述图片文件处理装置3000包括用户接口3003,其中,所述用户接口3003可以包括显示屏(Display)30031、键盘(Keyboard)30032。如图19所示,作为一种计算机可读存储介质的存储器3005中可以包括操作系统30051、网络通信模块30052、用户接口模块30053以及机器可读指令30054,所述机器可读指令30054包含图片文件处理应用程序30055。
在图19所示的图片文件处理装置3000中,处理器3001可以用于调用存储器3005中存储的图片文件处理应用程序30055,并具体执行以下操作:
生成图片文件对应的图片头信息和帧头信息,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在一个实施例中,所述处理器3001还执行以下步骤:
将所述图片头信息写入所述图片文件的图片头信息数据段中。
在一个实施例中,所述处理器3001还执行以下步骤:
将所述帧头信息写入所述图片文件的帧头信息数据段中。
在一个实施例中,所述处理器3001还执行以下步骤:
若根据所述图像特征信息确定所述图片文件包含透明度数据,则对所述图片文件中包含的第一图像对应的RGBA数据中包含的RGB数据进行编码生成第一码流数据,以及对其中包含的透明度数据进行编码生成第二码流数据;
将所述第一码流数据和所述第二码流数据写入所述第一图像对应的帧头信息所指示的码流数据段中。
需要说明的是,本申请实施例所描述的处理器3001所执行的步骤 及带来的有益效果可根据上述图12所示方法实施例中的方法具体实现,此处不再赘述。
请参见图20,为本申请实施例提供的一种图片文件处理装置的结构示意图。如图20所示,本申请实施例的所述图片文件处理装置4可以包括:文件解析模块41。在本申请一些实施例中,所述图片文件处理装置4还可以包括数据读取模块42和数据解码模块43中的至少一个。
文件解析模块41,用于解析图片文件,得到所述图片文件的图片头信息和帧头信息,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在本申请一些实施例中,所述文件解析模块41具体用于从图片文件的图片头信息数据段中读取所述图片文件的图片头信息。
在本申请一些实施例中,所述文件解析模块41具体用于从图片文件的帧头信息数据段中读取所述图片文件的帧头信息。
在本申请一些实施例中,所述图片文件处理装置4还包括数据读取模块42和数据解码模块43,其中:
所述数据读取模块42,用于若通过所述图像特征信息确定所述图片文件包含透明度数据,则读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据,所述码流数据包括第一码流数据和第二码流数据。
所述数据解码模块43,用于对所述第一码流数据和所述第二码流数据分别进行解码。
需要说明的是,本申请实施例所描述的图片文件处理装置4所执行的模块及带来的有益效果可根据上述图13所示方法实施例中的方法具体实现,此处不再赘述。
请参见图21,为本申请实施例提供的另一种图片文件处理装置的结构示意图。如图21所示,所述图片文件处理装置4000可以包括:至少 一个处理器4001,例如CPU,至少一个网络接口4004,存储器4005,至少一个通信总线4002。网络接口4004可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器4005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器4005还可以是至少一个位于远离前述处理器4001的存储装置。其中,通信总线4002用于实现这些组件之间的连接通信。在本申请一些实施例中,所述图片文件处理装置4000包括用户接口4003,其中,所述用户接口4003可以包括显示屏(Display)40031、键盘(Keyboard)40032。如图21所示,作为一种计算机可读存储介质的存储器4005中可以包括操作系统40051、网络通信模块40052、用户接口模块40053以及机器可读指令40054,所述机器可读指令40054中包含图片文件处理应用程序40055。
在图21所示的图片文件处理装置4000中,处理器4001可以用于调用存储器4005中存储的图片文件处理应用程序40055,并具体执行以下操作:
解析图片文件,得到所述图片文件的图片头信息和帧头信息,所述图片头信息包括指示所述图片文件是否存在透明度数据的图像特征信息,所述帧头信息用于指示所述图片文件的码流数据段。
在一个实施例中,所述处理器4001在执行解析图片文件,得到所述图片文件的图片头信息时,具体执行:
从图片文件的图片头信息数据段中读取所述图片文件的图片头信息。
在一个实施例中,所述处理器4001在执行解析图片文件,得到所述图片文件的帧头信息时,具体执行:
从图片文件的帧头信息数据段中读取所述图片文件的帧头信息。
在一个实施例中,所述处理器4001还执行以下步骤:
若通过所述图像特征信息确定所述图片文件包含透明度数据,则读取所述图片文件中所述帧头信息指示的码流数据段中的码流数据,所述 码流数据包括第一码流数据和第二码流数据;
对所述第一码流数据和所述第二码流数据分别进行解码。
需要说明的是,本申请实施例所描述的处理器4001所执行的步骤及带来的有益效果可根据上述图13所示方法实施例中的方法具体实现,此处不再赘述。
请参见图22,为本申请实施例提供的一种图片文件处理系统的系统架构图。如图22所示,该图片文件处理系统5000包括编码设备5001和解码设备5002。
在本申请一些实施例中,编码设备5001可以是图1c至图8c所示的编码装置,或者也可以包含具有实现图1c至图8c所示的编码装置功能的编码模块的终端设备;相应的,所述解码设备5002可以是图9至图11所示的解码装置,或者,也可以包含具有实现图9至图11所示的解码装置功能的解码模块的终端设备。
在本申请另外一些实施例中,编码设备5001可以是图12所示的图片文件处理装置,或者也可以包含具有实现图12所示的图片文件处理装置功能的图片文件处理模块;相应的,解码设备5002可以是图13所示的图片文件处理装置,或者也可以包含具有实现图13所示的图片文件处理装置的图片文件处理模块。
本申请实施例中涉及的编码装置、解码装置、图片文件处理装置、终端设备可以包括平板电脑、手机、电子阅读器、个人计算机(Personal Computer,PC)、笔记本电脑、车载设备、网络电视、可穿戴设备等设备,本申请实施例对此不做限定。
进一步的,结合图23和图24对本申请实施例涉及到的编码设备5001和解码设备5002进行具体介绍。图23和图24是从功能逻辑的角度更完整地展示了以上所示方法可能涉及到的其他方面,以方便读者进一步理解本申请记载的技术方案。请一并参见图23,为本申请实施例提 供的一种编码模块的示例图。所述编码设备5001可以包括图23所示的编码模块6000,而编码模块6000可以包括:RGB数据和透明度数据分离子模块6001、第一视频编码模式子模块6002、第二视频编码模式子模块6003以及图片头信息、帧头信息封装子模块6004。其中,RGB数据和透明度数据分离子模块6001用于将图片源格式中RGBA数据分离为RGB数据和透明度数据。第一视频编码模式子模块6002用于实现对RGB数据的编码以生成第一码流数据。第二视频编码模式子模块6003用于实现对透明度数据的编码以生成第二码流数据。图片头信息、帧头信息封装子模块6004用于生成包括第一码流数据和第二码流数据在内的码流数据的图片头信息和帧头信息以输出压缩图像数据。
具体实现中,对于静态格式的图片文件而言,首先,编码模块6000接收输入的该图片文件的RGBA数据,通过RGB数据和透明度数据分离子模块6001将RGBA数据划分为RGB数据和透明度数据;接着,第一视频编码模式子模块6002按照第一视频编码模式对RGB数据进行编码,生成第一码流数据;再接着,第二视频编码模式子模块6003按照第二视频编码模式对透明度数据进行编码,生成第二码流数据;接着,图片头信息、帧头信息封装子模块6004生成该图片文件的图片头信息和帧头信息,将第一码流数据、第二码流数据、帧头信息、图片头信息写入对应的数据段中,进而生成该RGBA数据对应的压缩图像数据。
对于动态格式的图片文件而言,首先,编码模块6000确定包含的帧数;接着,将每一帧的RGBA数据通过RGB数据和透明度数据分离子模块6001划分为RGB数据和透明度数据,第一视频编码模式子模块6002按照第一视频编码模式对RGB数据进行编码,生成第一码流数据,以及第二视频编码模式子模块6003按照第二视频编码模式对透明度数据进行编码,生成第二码流数据,图片头信息、帧头信息封装子模块6004生成每一帧对应的帧头信息,将各个码流数据和帧头信息写入对应的数据段;最后,图片头信息、帧头信息封装子模块6004生成该图片文件的图片头信息,并将图片头信息写入对应的数据段,进而生成该RGBA 数据对应的压缩图像数据。
在本申请一些实施例中,压缩图像数据也可以采用压缩码流、图像序列等名称来描述,本申请实施例对此不做限定。
请一并参见图24,为本申请实施例提供的一种解码模块的示例图。所述解码设备5002可以包括图24所示的解码模块7000,所述解码模块7000可以包括:图片头信息、帧头信息解析子模块7001、第一视频解码模式子模块7002、第二视频解码模式子模块7003以及RGB数据和透明度数据合并子模块7004。其中,图片头信息、帧头信息解析子模块7001用于对图片文件的压缩图像数据进行解析,以确定图片头信息和帧头信息,该压缩图像数据是通过图23所示的编码模块完成编码之后得到的数据。第一视频解码模式子模块7002用于实现对第一码流数据的解码,其中,第一码流数据是由RGB数据生成的。第二视频解码模式子模块7003用于实现对第二码流数据的解码,其中,第二码流数据是由透明度数据生成的。RGB数据和透明度数据合并子模块7004用于将RGB数据和透明度数据合并为RGBA数据,以输出RGBA数据。
具体实现中,对于静态格式的图片文件而言,首先,解码模块7000通过图片头信息、帧头信息解析子模块7001解析图片文件的压缩图像数据,得到图片文件的图片头信息和帧头信息,若根据图片头信息确定图片文件存在透明度数据,则从帧头信息指示的码流数据段获取第一码流数据和第二码流数据;接着,第一视频解码模式子模块7002按照第一视频解码模式对第一码流数据进行解码,生成RGB数据;再接着,第二视频解码模式子模块7003按照第二视频解码模式对第二码流数据进行解码,生成透明度数据;最后,RGB数据和透明度数据合并子模块7004将RGB数据和透明度数据进行合并,生成RGBA数据,并将RGBA数据输出。
对于动态格式的图片文件而言,首先,解码模块7000通过图片头信息、帧头信息解析子模块7001解析图片文件的压缩图像数据,得到图片文件的图片头信息和帧头信息,确定图片文件包含的帧数;接着, 若根据图片头信息确定图片文件存在透明度数据,则从每一帧图像的帧头信息指示的码流数据段获取第一码流数据和第二码流数据,第一视频解码模式子模块7002按照第一视频解码模式对每一帧图像对应的第一码流数据进行解码,生成RGB数据,以及第二视频解码模式子模块7003按照第二视频解码模式对每一帧图像对应的第二码流数据进行解码,生成透明度数据;最后,RGB数据和透明度数据合并子模块7004将每一帧图像的RGB数据和透明度数据进行合并,生成RGBA数据,并将该压缩图像数据包含的全部帧的RGBA数据输出。
针对图22所示的图片文件处理系统,举例来说,编码设备5001可以将源格式的图片文件按照图23所示的编码模块进行编码并生成压缩图像数据,并将编码之后的压缩图像数据传输至解码设备5002,解码设备5002接收到该压缩图像数据之后,按照图24所示的解码模块进行解码,以得到该图片文件对应的RGBA数据。其中,源格式的图片文件可以包括但不限定于jpeg、png、gif等。
Referring to FIG. 25, which is a schematic structural diagram of a terminal device provided by an embodiment of this application: as shown in FIG. 25, the terminal device 8000 includes an encoding module and a decoding module. In some embodiments of this application, the encoding module may be a module implementing the functions of the encoding device shown in FIG. 1c to FIG. 8c; correspondingly, the decoding module may be a module implementing the functions of the decoding device shown in FIG. 9 to FIG. 11. In some embodiments of this application, the encoding module may perform encoding as the encoding module 6000 described in FIG. 23, and the decoding module may perform decoding as the decoding module 7000 shown in FIG. 24. For the specific implementation process, reference may be made to the detailed description of the corresponding embodiments, which is not repeated here. In this way, a single terminal device can encode picture files in source formats such as jpeg, png, and gif to form a picture file in the new format. Encoding with a video encoding mode improves the compression ratio of the picture file and reduces its size, which speeds up picture loading and saves network transmission bandwidth and storage cost. In addition, by encoding the RGB data and the transparency data of the picture file separately, the transparency data is preserved while a video encoding mode is used, ensuring the quality of the picture file. The terminal device can also decode a picture file in the new format to obtain the corresponding RGBA data, recovering the RGB data and the transparency data through video decoding modes and thus ensuring the quality of the picture file.
A person of ordinary skill in the art will understand that all or part of the processes of the methods in the foregoing embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed by a processor, may include the processes of the foregoing method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above disclosure is merely of preferred embodiments of this application and certainly cannot be used to limit the scope of the claims of this application; therefore, equivalent changes made according to the claims of this application still fall within the scope covered by this application.

Claims (21)

  1. A picture file processing method, applied to a computing device, comprising:
    obtaining, from a code stream data segment of a picture file, first code stream data and second code stream data generated from a first image in the picture file;
    decoding the first code stream data according to a first video decoding mode to generate RGB data of the first image;
    decoding the second code stream data according to a second video decoding mode to generate transparency data of the first image; and
    generating RGBA data corresponding to the first image according to the RGB data and the transparency data of the first image.
  2. The method according to claim 1, wherein the decoding the first code stream data according to the first video decoding mode to generate the RGB data of the first image comprises:
    decoding the first code stream data according to the first video decoding mode to generate first YUV data of the first image; and
    converting the first YUV data into the RGB data of the first image.
  3. The method according to claim 1, wherein the decoding the second code stream data according to the second video decoding mode to generate the transparency data of the first image comprises:
    decoding the second code stream data according to the second video decoding mode to generate second YUV data of the first image; and
    converting the second YUV data into the transparency data of the first image.
  4. The method according to claim 3, wherein the converting the second YUV data into the transparency data of the first image comprises:
    setting the Y component of the second YUV data as the transparency data of the first image, and discarding the U component and the V component of the second YUV data.
  5. The method according to claim 1, further comprising:
    if the picture file is a picture file in an animated format and the first image is the image corresponding to the k-th frame of the picture file, determining whether the k-th frame is the last frame of the picture file, wherein k is a positive integer greater than 0;
    if the k-th frame is not the last frame of the picture file, obtaining, from the code stream data segment of the picture file, third code stream data and fourth code stream data generated from a second image corresponding to the (k+1)-th frame of the picture file;
    decoding the third code stream data according to a third video decoding mode to generate RGB data of the second image;
    decoding the fourth code stream data according to a fourth video decoding mode to generate transparency data of the second image; and
    generating RGBA data corresponding to the second image according to the RGB data and the transparency data of the second image.
  6. The method according to any one of claims 1 to 5, wherein before the obtaining, from the code stream data segment of the picture file, the first code stream data and the second code stream data generated from the first image in the picture file, the method further comprises:
    parsing the picture file to obtain picture header information and frame header information of the picture file, wherein the picture header information comprises image feature information indicating whether the picture file contains transparency data, and the frame header information indicates the code stream data segment of the picture file.
  7. The method according to claim 6, wherein the parsing the picture file to obtain the picture header information of the picture file comprises:
    reading the picture header information of the picture file from a picture header information data segment of the picture file;
    wherein the picture header information comprises an image file identifier, a decoder identifier, a version number, and the image feature information; the image file identifier indicates the type of the picture file; the decoder identifier indicates the codec standard adopted by the picture file; and the version number indicates the profile of the codec standard adopted by the picture file.
  8. The method according to claim 7, wherein the image feature information further comprises a start code of the image feature information, the length of the image feature information data segment, whether the picture file is a picture file in a static format, whether the picture file is a picture file in an animated format, whether the picture file is losslessly encoded, the YUV color space value range adopted by the picture file, the width of the picture file, the height of the picture file, and a frame count indicating the number of frames if the picture file is a picture file in an animated format.
  9. The method according to claim 6, wherein the parsing the picture file to obtain the frame header information of the picture file comprises:
    reading the frame header information of the picture file from a frame header information data segment of the picture file;
    wherein the frame header information comprises a start code of the frame header information and delay time information applicable if the picture file is a picture file in an animated format.
  10. The method according to claim 9, wherein the obtaining, from the code stream data segment of the picture file, the first code stream data and the second code stream data generated from the first image in the picture file comprises:
    if it is determined from the image feature information that the picture file contains transparency data, reading the code stream data in the code stream data segment indicated by the frame header information, wherein the code stream data comprises the first code stream data and the second code stream data.
  11. A picture file processing device, comprising:
    a processor and a memory connected to the processor, wherein the memory stores machine-readable instructions executable by the processor, and the processor executes the machine-readable instructions to perform the following operations:
    obtaining, from a code stream data segment of a picture file, first code stream data and second code stream data generated from a first image in the picture file;
    decoding the first code stream data according to a first video decoding mode to generate RGB data of the first image;
    decoding the second code stream data according to a second video decoding mode to generate transparency data of the first image; and
    generating RGBA data corresponding to the first image according to the RGB data and the transparency data of the first image.
  12. The device according to claim 11, wherein the decoding the first code stream data according to the first video decoding mode to generate the RGB data of the first image comprises:
    decoding the first code stream data according to the first video decoding mode to generate first YUV data of the first image; and
    converting the first YUV data into the RGB data of the first image.
  13. The device according to claim 11, wherein the decoding the second code stream data according to the second video decoding mode to generate the transparency data of the first image comprises:
    decoding the second code stream data according to the second video decoding mode to generate second YUV data of the first image; and
    converting the second YUV data into the transparency data of the first image.
  14. The device according to claim 13, wherein the converting the second YUV data into the transparency data of the first image comprises:
    setting the Y component of the second YUV data as the transparency data of the first image, and discarding the U component and the V component of the second YUV data.
  15. The device according to claim 11, wherein the processor executes the machine-readable instructions to further perform the following operations:
    if the picture file is a picture file in an animated format and the first image is the image corresponding to the k-th frame of the picture file, determining whether the k-th frame is the last frame of the picture file, wherein k is a positive integer greater than 0;
    if the k-th frame is not the last frame of the picture file, obtaining, from the code stream data segment of the picture file, third code stream data and fourth code stream data generated from a second image corresponding to the (k+1)-th frame of the picture file;
    decoding the third code stream data according to a third video decoding mode to generate RGB data of the second image;
    decoding the fourth code stream data according to a fourth video decoding mode to generate transparency data of the second image; and
    generating RGBA data corresponding to the second image according to the RGB data and the transparency data of the second image.
  16. The device according to any one of claims 11 to 15, wherein before the obtaining, from the code stream data segment of the picture file, the first code stream data and the second code stream data generated from the first image in the picture file, the operations further comprise:
    parsing the picture file to obtain picture header information and frame header information of the picture file, wherein the picture header information comprises image feature information indicating whether the picture file contains transparency data, and the frame header information indicates the code stream data segment of the picture file.
  17. The device according to claim 16, wherein the parsing the picture file to obtain the picture header information of the picture file comprises:
    reading the picture header information of the picture file from a picture header information data segment of the picture file;
    wherein the picture header information comprises an image file identifier, a decoder identifier, a version number, and the image feature information; the image file identifier indicates the type of the picture file; the decoder identifier indicates the codec standard adopted by the picture file; and the version number indicates the profile of the codec standard adopted by the picture file.
  18. The device according to claim 17, wherein the image feature information further comprises a start code of the image feature information, the length of the image feature information data segment, whether the picture file is a picture file in a static format, whether the picture file is a picture file in an animated format, whether the picture file is losslessly encoded, the YUV color space value range adopted by the picture file, the width of the picture file, the height of the picture file, and a frame count indicating the number of frames if the picture file is a picture file in an animated format.
  19. The device according to claim 16, wherein the parsing the picture file to obtain the frame header information of the picture file comprises:
    reading the frame header information of the picture file from a frame header information data segment of the picture file;
    wherein the frame header information comprises a start code of the frame header information and delay time information applicable if the picture file is a picture file in an animated format.
  20. The device according to claim 19, wherein the obtaining, from the code stream data segment of the picture file, the first code stream data and the second code stream data generated from the first image in the picture file comprises:
    if it is determined from the image feature information that the picture file contains transparency data, reading the code stream data in the code stream data segment indicated by the frame header information, wherein the code stream data comprises the first code stream data and the second code stream data.
  21. A non-volatile computer-readable storage medium storing machine-readable instructions, the machine-readable instructions causing a processor to perform the method according to any one of claims 1 to 10.
PCT/CN2018/079442 2017-04-08 2018-03-19 Picture file processing method, device, and storage medium WO2018184464A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710225913.7 2017-04-08
CN201710225913.7A CN107071516B (zh) 2017-04-08 2017-04-08 Picture file processing method

Publications (1)

Publication Number Publication Date
WO2018184464A1 true WO2018184464A1 (zh) 2018-10-11

Family

ID=59602473

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079442 WO2018184464A1 (zh) 2017-04-08 2018-03-19 Picture file processing method, device, and storage medium

Country Status (3)

Country Link
CN (2) CN109040789B (zh)
TW (1) TWI672939B (zh)
WO (1) WO2018184464A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109040789B (zh) * 2017-04-08 2021-05-28 腾讯科技(深圳)有限公司 Picture file processing method
CN108322722B (zh) * 2018-01-24 2020-01-21 阿里巴巴集团控股有限公司 Augmented-reality-based image processing method, device, and electronic equipment
CN109309868B (zh) * 2018-08-19 2019-06-18 上海极链网络科技有限公司 Video file configuration parsing system
EP3734973B1 (en) * 2019-05-02 2023-07-05 Sick IVP AB Method and encoder relating to encoding of pixel values to accomplish lossless compression of a digital image
CN112070867A (zh) * 2019-06-11 2020-12-11 腾讯科技(深圳)有限公司 Animation file processing method and device, computer-readable storage medium, and computer equipment
CN113994708A (zh) * 2020-05-28 2022-01-28 深圳市大疆创新科技有限公司 Encoding method, decoding method, device, and system
EP4231640A1 (en) * 2022-02-16 2023-08-23 Beijing Xiaomi Mobile Software Co., Ltd. Encoding/decoding video picture data
WO2023210594A1 (ja) * 2022-04-27 2023-11-02 ヌヴォトンテクノロジージャパン株式会社 Image encoding device and image encoding method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540901A (zh) * 2008-03-20 2009-09-23 华为技术有限公司 Encoding and decoding method and device
CN101742317A (zh) * 2009-12-31 2010-06-16 北京中科大洋科技发展股份有限公司 Video compression coding method with an alpha transparency channel
CN102036059A (zh) * 2009-09-25 2011-04-27 腾讯科技(深圳)有限公司 Method, device, and system for compressing and decompressing transparent images
CN104333762A (zh) * 2014-11-24 2015-02-04 成都瑞博慧窗信息技术有限公司 Video decoding method
US20150074735A1 (en) * 2013-09-06 2015-03-12 Seespace Ltd. Method and Apparatus for Rendering Video Content Including Secondary Digital Content
CN106375759A (zh) * 2016-08-31 2017-02-01 深圳超多维科技有限公司 Method and device for encoding and decoding video image data
CN107071516A (zh) * 2017-04-08 2017-08-18 腾讯科技(深圳)有限公司 Picture file processing method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8189908B2 (en) * 2005-09-02 2012-05-29 Adobe Systems, Inc. System and method for compressing video data and alpha channel data using a single stream
US8681170B2 (en) * 2011-05-05 2014-03-25 Ati Technologies Ulc Apparatus and method for multi-streaming for more than three pixel component values
US8655086B1 (en) * 2011-11-01 2014-02-18 Zynga, Inc. Image compression with alpha channel data
CN102724582B (zh) * 2012-05-31 2014-09-24 福州瑞芯微电子有限公司 Method for displaying key colors based on a user interface
CN102724471A (zh) * 2012-06-11 2012-10-10 宇龙计算机通信科技(深圳)有限公司 Method and device for converting pictures and videos
KR20160026005A (ko) * 2014-08-29 2016-03-09 (주) 디아이지 커뮤니케이션 Apparatus and method for compressing augmented reality video including an alpha channel
CN104980798B (zh) * 2015-07-14 2018-04-10 天脉聚源(北京)教育科技有限公司 Remote video display method and device


Also Published As

Publication number Publication date
CN109040789A (zh) 2018-12-18
CN107071516B (zh) 2018-12-21
TW201838409A (zh) 2018-10-16
TWI672939B (zh) 2019-09-21
CN107071516A (zh) 2017-08-18
CN109040789B (zh) 2021-05-28


Legal Events

Code 121: the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 18780340; country of ref document: EP; kind code: A1.
Code NENP: non-entry into the national phase. Ref country code: DE.
Code 122: PCT application non-entry in European phase. Ref document number: 18780340; country of ref document: EP; kind code: A1.