CN108668169B - Image information processing method and device, and storage medium - Google Patents

Image information processing method and device, and storage medium Download PDF

Info

Publication number
CN108668169B
CN108668169B CN201810559099.7A CN201810559099A
Authority
CN
China
Prior art keywords
image
component
yuva
images
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810559099.7A
Other languages
Chinese (zh)
Other versions
CN108668169A (en)
Inventor
荆锐
任军
赵代平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201810559099.7A priority Critical patent/CN108668169B/en
Priority to CN202111165241.8A priority patent/CN113766319A/en
Publication of CN108668169A publication Critical patent/CN108668169A/en
Application granted granted Critical
Publication of CN108668169B publication Critical patent/CN108668169B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Color Television Systems (AREA)
  • Color Image Communication Systems (AREA)

Abstract

The embodiments of the invention provide an image information processing method and device, and a storage medium. The image information processing method includes: determining a key image and common images other than the key image in YUVA images to be compressed, wherein each YUVA image comprises: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component; and performing video compression on the YUVA images according to the key image and the common images to obtain a compressed file of the YUVA images.

Description

Image information processing method and device, and storage medium
Technical Field
The present invention relates to the field of information technologies, and in particular, to an image information processing method and apparatus, and a storage medium.
Background
The sequence frame is an image sequence formed by a plurality of images in sequence; the sequential frame animation refers to a playing technique for playing images one by one in sequence.
A file of a sequential frame animation generally includes a plurality of images. If these multiple images are directly transmitted, the amount of data to be transmitted is large, and the occupied transmission bandwidth is large.
In order to reduce the amount of data, image files are usually compressed, but the amount of data after such compression is still large and the compression rate is low.
Disclosure of Invention
In view of the above, embodiments of the present invention are directed to a method and an apparatus for processing image information, and a storage medium.
The technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image information processing method, including:
determining a key image and common images other than the key image in YUVA images to be compressed; wherein each YUVA image comprises: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component;
and performing video compression on the YUVA images according to the key image and the common images to obtain a compressed file of the YUVA images.
Optionally, the determining a key picture in a YUVA picture to be compressed and a normal picture other than the key picture includes:
determining a calculation component, wherein the calculation component is: one or more of the Y component, the V component, and the U component, or the calculation component is: one or more of the Y component, the V component, and the conversion component of the U component;
determining a difference calculation weight according to the component A;
obtaining an nth calculation value of the nth YUVA image and an n +1 th calculation value of the n +1 th YUVA image based on the extracted color difference components and the calculation weight, wherein n is a positive integer;
carrying out difference calculation on the nth calculation value and the n +1 th calculation value;
and determining a key image and the common image according to the difference calculation result.
Optionally, the determining a key image and the common image according to the result of the difference calculation includes:
if the difference calculation result of the (n+1)-th calculated value compared with the n-th calculated value is outside a preset range, the (n+1)-th YUVA image is the key image;
and/or,
if the difference calculation result of the (n+1)-th calculated value compared with the n-th calculated value is within the preset range, the (n+1)-th YUVA image is the normal image.
Optionally, the determining a difference calculation weight according to the component a includes:
carrying out normalization processing on the A component to obtain the difference calculation weight.
Optionally, the method further comprises:
converting an original RGBA image into the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
wherein the compressed data amount of the YUVA image is smaller than the compressed data amount of the RGBA image.
Optionally, the determining a key picture in a YUVA picture to be compressed and a normal picture other than the key picture includes:
extracting a color component in the RGBA image, wherein the color component comprises: the R component, the G component, and the B component;
determining a difference calculation weight according to the component A;
obtaining an nth calculation value of the nth RGBA image and an nth +1 calculation value of the n +1 RGBA images based on the extracted color difference components and the calculation weight, wherein n is a positive integer;
carrying out difference calculation on the nth calculation value and the n +1 th calculation value;
and determining a key image and the common image according to the difference calculation result.
Optionally, the plurality of YUVA images are image frames of a sequential frame animation.
Optionally, the video compressing the YUVA image according to the key image and the normal image and obtaining a compressed file of the YUVA image includes:
and performing video compression on the YUVA image by adopting a VP9 coding mode according to the key image and the common image.
In a second aspect, the present invention provides an image information processing method, including:
receiving a compressed file obtained by video compressing YUVA images based on a key image and common images, wherein each YUVA image comprises: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component;
restoring the YUVA image carried in the compressed file;
and playing the sequence frame animation according to the playing sequence of the YUVA image.
Optionally, the method further comprises:
converting the YUVA image to an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component.
Optionally, the restoring the YUVA image carried in the compressed file includes:
restoring a YUVA image in the compressed file before playing the sequence frame animation;
storing image data, wherein the image data is the YUVA image or the converted RGBA image of the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
the playing of the sequence frame animation according to the playing sequence of the YUVA image comprises the following steps:
reading the image data when the sequence frame animation is played;
and sequentially playing the sequence frame animation according to the playing sequence of the YUVA images.
Optionally, the method further comprises:
deleting the first and second blank components in the YUVA image prior to storing the image data.
Optionally, the method further comprises:
if the current load rate is less than the preset load rate, compressing the YUVA image or the image data without the first blank component and the second blank component to obtain compressed data;
the storing the image data includes:
and storing the compressed data.
Optionally, if the current load rate is less than a preset load rate, compressing the YUVA image or the image data from which the first blank component and the second blank component are removed to obtain compressed data, including:
and if the current load rate is less than the preset load rate, compressing the YUVA image, or the image data from which the first blank component and the second blank component have been removed, by using JPEG (Joint Photographic Experts Group) compression technology.
Optionally, the playing the sequence frame animation according to the playing order of the YUVA images includes:
determining whether there is compressed data of the YUVA image;
if the compressed data exist, decompressing the compressed data to restore the YUVA image;
or,
and if the compressed data does not exist, reading the YUVA image or restoring the image data from which the first blank component and the second blank component are removed.
In a third aspect, an embodiment of the present invention provides an image information compression apparatus, including:
a determining module, configured to determine a key image in YUVA images to be compressed and common images other than the key image; wherein each YUVA image comprises: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component;
a first compression module, configured to perform video compression on the YUVA images according to the key image and the common images and obtain a compressed file of the YUVA images.
Optionally, the determining module includes:
a first determining unit, configured to determine a calculation component, where the calculation component is: one or more of the Y component, the V component, and the U component, or the calculation component is: one or more of the Y component, the V component, and the conversion component of the U component;
the second determining unit is used for determining a difference calculation weight according to the component A;
a first calculating unit, configured to obtain an nth calculated value of the nth YUVA image and an n +1 th calculated value of the n +1 th YUVA image based on the extracted color difference components and the calculated weights, where n is a positive integer;
a second calculation unit configured to perform difference calculation on the nth calculation value and the (n + 1) th calculation value;
and the third determining unit is used for determining the key image and the common image according to the difference calculation result.
Optionally, the third determining unit is specifically configured to determine that the n +1 th YUVA image is the key image if a difference calculation result of the n +1 th calculated value compared to the nth calculated value is outside a preset range; and/or, if the difference calculation result of the n +1 th calculated value compared with the n-th calculated value is within the preset range, the n +1 th YUVA image is the normal image.
Optionally, the second determining unit is specifically configured to perform normalization processing on the component a to obtain the difference calculation weight.
Optionally, the apparatus further comprises:
a first conversion module, configured to convert an original RGBA image into the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component;
wherein the compressed data amount of the YUVA image is smaller than the compressed data amount of the RGBA image.
Optionally, the determining module is specifically configured to extract a color component in the RGBA image, where the color component includes: the R component, the G component, and the B component; determining a difference calculation weight according to the component A; obtaining an nth calculation value of the nth RGBA image and an nth +1 calculation value of the n +1 RGBA images based on the extracted color difference components and the calculation weight, wherein n is a positive integer; carrying out difference calculation on the nth calculation value and the n +1 th calculation value; and determining a key image and the common image according to the difference calculation result.
Optionally, the plurality of YUVA images are image frames of a sequential frame animation.
Optionally, the first compression module is specifically configured to perform video compression on the YUVA image by using a VP9 coding scheme according to the key image and the normal image.
In a fourth aspect, an embodiment of the present invention provides an image information processing apparatus, including:
a receiving module, configured to receive a compressed file obtained by video compressing YUVA images based on a key image and common images, where each YUVA image includes: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component;
the restoration module is used for restoring the YUVA image carried in the compressed file;
and the playing module is used for playing the sequence frame animation according to the playing sequence of the YUVA image.
Optionally, the apparatus further comprises:
a second conversion module, configured to convert the YUVA image into an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component.
Optionally, the restoring module is specifically configured to restore the YUVA image in the compressed file before playing the sequence frame animation; storing image data, wherein the image data is the YUVA image or the converted RGBA image of the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
the playing module is specifically used for reading the image data when the sequence frame animation is played; and sequentially playing the sequence frame animation according to the playing sequence of the YUVA images.
Optionally, the apparatus further comprises:
a deletion module to delete the first and second blank components in the YUVA image prior to storing the image data.
Optionally, the apparatus further comprises:
a second compression module, configured to compress the YUVA image or the image data from which the first blank component and the second blank component are removed to obtain compressed data, if the current load rate is less than a preset load rate;
and the storage module is specifically used for storing the compressed data.
Optionally, the second compression module is specifically configured to compress the YUVA image or the image data from which the first blank component or the second blank component is removed by using a JPEG compression technique if the current load rate is smaller than the preset load rate.
Optionally, the playing module is specifically configured to determine whether compressed data of the YUVA image exists; if the compressed data exist, decompressing the compressed data to restore the YUVA image; or, if there is no compressed data, reading the YUVA image or restoring the image data from which the first blank component and the second blank component are removed.
In a fifth aspect, embodiments of the present invention provide a computer storage medium having stored thereon computer-executable instructions; after being executed, the computer-executable instructions can implement the method provided by any one of the technical solutions of the first aspect or the second aspect.
In a sixth aspect, an embodiment of the present invention provides a computer program product, which includes computer-executable instructions; after being executed, the computer-executable instructions can implement the method provided by any one of the technical solutions of the first aspect or the second aspect.
In a seventh aspect, an embodiment of the present invention provides an electronic device, including:
a memory;
and the processor is connected with the memory and used for realizing the method provided by any one of the technical schemes of the first aspect or the second aspect by executing the computer-executable instructions on the memory.
The embodiment of the invention provides a processing method for compressing and decompressing multiple YUVA images. The YUVA image is an image that carries a transparency component (i.e., an image that can be locally transparent).
In the first aspect, video compression technology is used to compress a plurality of YUVA images, so that inter-image compression among the YUVA images is realized; compared with intra-image compression of independent image files, the amount of data can be greatly reduced.
In the second aspect, key images and normal images are determined in the compression process, so that the compression is not limited to finding a part common to all YUVA images to be compressed; instead, compression within different groups of images can be maximized based on different key images, thereby further reducing the data volume.
In the third aspect, by introducing the blank components in this embodiment, the U component and the V component, plus their corresponding blank components, can be aligned with the Y component and the A component. On one hand, video compression can then be performed; on the other hand, a decompression end can decompress the YUVA image normally. Image compression with transparency is thus implemented, which solves the prior-art problem of a low compression rate when a sequence frame animation includes image frames with transparency, reduces the network resources consumed during image information transmission, and reduces the storage resources occupied during storage.
Drawings
Fig. 1 is a schematic flowchart of a first image information processing method according to an embodiment of the present invention;
fig. 2 is an equivalent schematic diagram of components of a YUVA image according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of determining a key image and a normal image according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a second image information processing method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an image information processing apparatus according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a third method for processing image information according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another image information processing apparatus according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating a fourth image information processing method according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a fifth method for processing image information according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating a sixth image information processing method according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and the specific embodiments of the specification.
As shown in fig. 1, the present embodiment provides an image information processing method including:
step S110: determining a key image and common images other than the key image in YUVA images to be compressed; wherein each YUVA image comprises: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component;
step S120: performing video compression on the YUVA images according to the key image and the common images to obtain a compressed file of the YUVA images.
The image information processing method provided in some embodiments may be applied to a compression end, which may also be a sending end, and may be a server or a terminal device providing the image information. The server can be a cloud server or a server group applied to the network. The terminal device can be various electronic devices, such as a mobile phone, a wearable device, a virtual reality device or an augmented reality device.
The YUVA images to be compressed in step S110 may be a plurality of independent YUVA images whose contents are related. In this embodiment, the content relevance means that two YUVA images played one after the other are similar, and this relationship can be described by a degree of similarity or a degree of difference. This embodiment exploits that similarity for video compression: a plurality of independent YUVA image files are converted into a video and compressed, and the video compression technology reduces the data volume as much as possible, so that fewer storage resources are needed to store the compressed file, less traffic is consumed to transmit it, and less bandwidth is occupied.
With the video compression technique of the present embodiment, inter-image compression between adjacent YUVA images can be achieved, which removes, for example, redundant data in portions that are identical between YUVA images. Meanwhile, a key image and normal images are determined, where a normal image is any image other than a key image. A key image may be an image whose degree of difference from the previous image is greater than a preset value; for example, if the degree of difference between the (s+1)-th image and the s-th image is greater than the preset value (i.e., the similarity is less than a specific value), the (s+1)-th image may be considered a key image. An image whose degree of difference from a key image is smaller than a predetermined value may be called a normal image of that key image. Through the distinction between key images and common images, a plurality of YUVA images can be divided into several groups according to similarity before compression; compared with treating all YUVA images as one group and removing only the portion common to all of them, this maximizes the removal of redundant data within each group. Therefore, the data volume is compressed through inter-image compression, and at the same time the efficiency of the inter-image compression is improved as much as possible by distinguishing key images from common images.
In this embodiment, the YUVA image is a new image format: in addition to the usual Y component, U component, and V component, it includes an A component, a first blank component, and a second blank component.
In this embodiment, the Y component and the A component may each include component values of W × H pixels, where W is the number of columns of pixels in one YUVA image and H is the number of rows of pixels in one YUVA image. For example, the Y component may include the luminance values of W × H pixels, and the A component may include the transparency values of W × H pixels.
In this embodiment, the U component and the V component each include W × H/4 pixel chrominance values; of course, the number of pixel rows corresponding to the U component and the V component is different.
In image processing, the Y component and the A component each occupy one first channel; thus, the data size corresponding to one first channel is the component values of W × H pixels.
The U component and the V component may be regarded as components of a second channel. If the second channel included only the U component and the V component, the corresponding data amount of that channel would be the component values of W × H/2 pixels, which is less than the data amount of the first channel; this mismatch between the data amounts of the first channel and the second channel could cause an image playing error when the image is displayed. Therefore, in the present embodiment, a first blank component and a second blank component are also introduced; both can be components whose values are 0. The sum of the data amounts of the U component and the first blank component is W × H/2 pixel component values, and the sum of the data amounts of the V component and the second blank component is W × H/2 pixel component values. Thus, the sum of the U component, the V component, the first blank component, and the second blank component is equal to the component values of W × H pixels. If the second channel therefore includes the U component, the V component, the first blank component, and the second blank component, the image presentation error caused by the difference between the data amounts of the first channel and the second channel is avoided. As a result, a YUVA image can include not only a luminance component and a chrominance component but also a transparency component, and when the YUVA image is decoded and displayed, an image with its own transparency can be output directly, without needing an additional gray-scale image to carry the transparency as in the prior art, thereby reducing the number of images and the data amount they generate.
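To make the component layout described above concrete, the following Python/NumPy sketch (for illustration only, not part of the patent text) assembles one possible in-memory arrangement of the planes; the function name and the ordering of the padded second channel are assumptions.

    import numpy as np

    def build_yuva_planes(y, u, v, a):
        # y, a: (H, W) uint8 planes (W*H samples each).
        # u, v: (H//2, W//2) uint8 planes (W*H/4 samples each, 4:2:0 subsampling).
        blank = np.zeros(u.size, dtype=np.uint8)         # blank components, all zeros
        u_padded = np.concatenate([u.ravel(), blank])    # U + first blank  -> W*H/2 samples
        v_padded = np.concatenate([v.ravel(), blank])    # V + second blank -> W*H/2 samples
        uv_plane = np.concatenate([u_padded, v_padded])  # together W*H, same size as Y and A
        return y.ravel(), uv_plane, a.ravel()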
Fig. 2 is an equivalent schematic diagram of the components of a YUVA image; many implementations are possible and are not limited to any of the above.
Alternatively, as shown in fig. 3, the step S110 may include:
step S111: determining a calculation component, wherein the calculation component is: one or more of the Y component, the V component, and the U component, or the calculation component is: one or more of the Y component, the V component, and the conversion component of the U component;
step S112: determining a difference calculation weight according to the component A;
step S113: obtaining an nth calculation value of the nth YUVA image and an n +1 th calculation value of the n +1 th YUVA image based on the extracted color difference components and the calculation weight, wherein n is a positive integer;
step S114: carrying out difference calculation on the nth calculation value and the n +1 th calculation value;
step S115: and determining a key image and the common image according to the difference calculation result.
In this embodiment, a calculation component is first determined. The calculation component may be one or more of the Y component, the U component, and the V component in a YUVA image, or one or more of the components from which the Y, U, and V components are converted, i.e., one or more of the R, B, and G components.
For a channel with m bits, if the value of the A component is 0, the channel is completely transparent; if the value is 2^m − 1, it is opaque; and if the value is between 0 and 2^m − 1, it is semi-transparent. For example, for an 8-bit channel, a value of 0 for the A component indicates full transparency; a value of 255 indicates opacity; and a value between 0 and 255 indicates semi-transparency.
It is to be noted that, in the embodiment of the present invention, the A component is introduced to determine the weight. Taking an 8-bit channel as an example, if the A component value 0 indicates full transparency, then even though the corresponding pixel has a color, that color will not be presented. The difference calculation weight is therefore set based on the A component, so that the similarity or difference is calculated by combining the A component with the color components, and the comparison reflects, as closely as possible, the effect the two images will finally present to the user.
In this embodiment, if only one component is selected as the calculation component, the Y component may be selected. Compared with the U component and/or the V component, the Y component represents the characteristics of the image more finely, so selecting the Y component as the single calculation component accurately reflects the characteristics of the image; and because only one component participates in the calculation, the amount of computation is reduced, which is a substantial saving for images with a large number of pixels.
In some embodiments, when deciding whether to select one or more components of the image data as calculation components, a rough estimate of the degree of difference between the two images may be made first. The estimated difference of the two images is obtained, for example, by down-sampling or by a predictive algorithm. The estimated difference may include at least one of an estimated transparency difference and an estimated color difference. For example, the down-sampling may sample the individual component values of the pixels at a rate of 1/2, 1/5, or 1/10; the predictive algorithm may include a mean algorithm, a median algorithm, or an extremum algorithm.
For example, the method may comprise:
and acquiring transparency difference between the images, and if the transparency difference is greater than a transparency difference threshold, selecting more than one component as the calculation component.
In some embodiments, before determining the color difference between the two images, the A components of the two images may be compared. If the transparency difference between the A components of the two images is greater than the transparency difference threshold, at least one of the two images may be directly considered a key image; if the transparency difference is less than the threshold, the color differences of the two images are then compared. If the A components of the two images differ, the transparent areas presented to the user differ, and the presentation effects are very different. Therefore, in this embodiment, whether to select one component or multiple components of the image data for the subsequent distinction between key image frames and common images can be decided based on the comparison of the A components, which can greatly reduce the amount of calculation.
The estimated transparency difference being greater than the estimated transparency difference threshold may comprise at least one of:
if the two images are overlaid, the distance between the fully transparent areas of the two images is larger than a preset distance;
if the two images are overlaid, the fully transparent regions of the two images are separated by at least one translucent or opaque image sub-region.
In order to compare the A components quickly, the A components of some pixels can be extracted from each image sub-area of the two images by down-sampling and used as representatives for comparison, so that the transparency difference between the two images can be estimated.
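For illustration only, a minimal Python/NumPy sketch of such a down-sampled A-component comparison is given below; the stride and the use of a mean absolute difference as the estimate are assumptions, since the patent does not fix a particular formula.

    import numpy as np

    def estimated_transparency_difference(a1, a2, stride=10):
        # a1, a2: (H, W) uint8 A planes of the two images.
        # Sample every `stride`-th pixel in both directions as representatives.
        s1 = a1[::stride, ::stride].astype(np.int16)
        s2 = a2[::stride, ::stride].astype(np.int16)
        return float(np.mean(np.abs(s1 - s2)))  # larger value -> larger estimated difference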
In some embodiments, the estimated color difference may be obtained by comparing the colors of the two images, and if the estimated color difference is small, the estimated transparency difference of the two images may be determined according to the A component.
In some embodiments, the method further comprises:
if the estimated transparency difference between the images is less than the transparency difference threshold, the step of selecting one or more components as calculation components based on the estimated color difference between the two images is carried out.
The selecting one or more components as the calculation components based on the color difference of the two images may include:
obtaining the estimated color difference between the images; if the estimated color difference is larger than a color difference threshold, selecting two or three components as calculation components to participate in the difference calculation; if it is smaller than the color difference threshold, selecting only one component as the calculation component for the difference calculation. The images may be YUVA images, RGBA images corresponding to the YUVA images, or the like.
Determining the estimated color difference may comprise at least one of:
comparing the mean of the pixel values of the two images;
comparing the median of the pixel values of the two images;
comparing the maximum pixel values of the two images;
comparing the minimum pixel values of the two images;
According to one or more of the comparison results of the mean value, the median value, the maximum pixel value, and the minimum pixel value, it is judged whether the degree of color difference between the two corresponding images is large enough. For example, if the difference between the two mean values is greater than a predetermined mean value, the estimated color difference between the two images is considered large; otherwise it is considered small. Likewise, if the difference between the two median values is greater than a predetermined median value, the estimated color difference is considered large; otherwise it is considered small. The comparisons of the minimum pixel values and of the maximum pixel values can also be combined: if both comparison values are greater than their corresponding predetermined values, the degree of color difference of the images is considered sufficiently large; otherwise it is considered small. If the estimated color difference of the images is large, at least one of the two images is a key image.
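The statistic-based estimate described above could be sketched as follows (illustration only; the threshold values are arbitrary examples, not values from the patent):

    import numpy as np

    def estimated_color_difference_large(img1, img2,
                                         thresholds=(8.0, 8.0, 16.0, 16.0)):
        # img1, img2: single color planes (e.g. the Y planes) as uint8 arrays.
        # Compare mean, median, minimum and maximum pixel values against example thresholds.
        def stats(x):
            x = x.astype(np.float64)
            return np.array([x.mean(), np.median(x), x.min(), x.max()])
        diff = np.abs(stats(img1) - stats(img2))
        return bool(np.any(diff > np.asarray(thresholds)))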
In this embodiment, the difference calculation may be performed by comparing, one by one, the pixels at corresponding coordinates in the two images. For example, the Y component of the pixel at the y-th coordinate in the x-th row of the n-th YUVA image is combined with the weight corresponding to the A component of that pixel to obtain a comparison value for the pixel; this comparison value is one of the n-th calculated values mentioned above. The same calculation for the pixel at the y-th coordinate in the x-th row of the (n+1)-th YUVA image gives one of the (n+1)-th calculated values. If the difference between the two comparison values is small, for example within a given interval, the pixel values of the two pixels are considered equal. The number of pixels with equal pixel values in the two YUVA images can then be counted and used as a parameter indicating the difference between the two images; alternatively, the ratio between the number of pixels with equal values and the total number of pixels in one YUVA image can be calculated, and this ratio reflects the similarity between the two YUVA images.
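One way the A-weighted comparison of steps S112 to S114 could look in code is sketched below (illustration only, assuming 8-bit channels and the Y component as the single calculation component; the tolerance interval is an example value):

    import numpy as np

    def weighted_equal_ratio(y_n, a_n, y_n1, a_n1, tol=4.0):
        # y_*, a_*: (H, W) uint8 planes of frame n and frame n+1.
        w_n  = a_n.astype(np.float32)  / 255.0    # step S112: normalized A as the weight (frame n)
        w_n1 = a_n1.astype(np.float32) / 255.0    # normalized A as the weight (frame n+1)
        calc_n  = y_n.astype(np.float32)  * w_n   # n-th calculated values (step S113)
        calc_n1 = y_n1.astype(np.float32) * w_n1  # (n+1)-th calculated values
        equal = np.abs(calc_n - calc_n1) <= tol   # step S114: per-pixel difference calculation
        return float(np.count_nonzero(equal)) / equal.size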
In some embodiments, the key image and the normal image may be determined by down-sampling in order to reduce the amount of computation. The sampling frequency at which the calculated value is calculated may be higher than the sampling frequency at which the estimated difference is made.
Alternatively, the step S115 may include:
if the difference calculation result of the (n+1)-th calculated value compared with the n-th calculated value is outside a preset range, the (n+1)-th YUVA image is the key image;
and/or,
if the difference calculation result of the (n+1)-th calculated value compared with the n-th calculated value is within the preset range, the (n+1)-th YUVA image is the normal image.
The result of the difference calculation may be the number of pixels whose pixel values are not equal as described above, or the ratio obtained by dividing the number of unequal pixels (the total number of pixels of one YUVA image minus the number of pixels with equal values) by the total number of pixels. For example, the comparison values of all pixels involved in the calculation may be averaged; if the average is within the average-value range, the similarity between the two images is considered high, and if it is outside that range, the difference between the two images is considered large and one of the two images is a key image. In short, if the result of the difference calculation is outside the preset range, the degree of difference between the two YUVA images is large and the (n+1)-th YUVA image is a key image that differs greatly from the previous YUVA image; otherwise, the (n+1)-th YUVA image may be a normal image.
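Building on the ratio computed in the previous sketch, the key/normal decision could then be made as follows (illustration only; the similarity threshold is an example value, and treating the first frame as a key image is an assumption):

    def classify_frames(frames, threshold=0.9):
        # frames: list of (Y, A) plane pairs in playing order.
        # A frame whose similarity ratio to the previous frame falls below `threshold`
        # (i.e. outside the preset range) is treated as a key image.
        key_indices, common_indices = [0], []
        for n in range(len(frames) - 1):
            y_n,  a_n  = frames[n]
            y_n1, a_n1 = frames[n + 1]
            ratio = weighted_equal_ratio(y_n, a_n, y_n1, a_n1)  # helper sketched above
            (common_indices if ratio >= threshold else key_indices).append(n + 1)
        return key_indices, common_indices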
Optionally, the value range of the A component is 0 to 2^m − 1, where m is the number of bits of a color channel; the step S112 may include: carrying out normalization processing on the A component to obtain the difference calculation weight.
In the normalization process, the denominator may be 2^m; thus, the value range of the difference calculation weight obtained from the A component is 0 to 1.
In some embodiments, the A component may be used directly as the difference calculation weight. However, using the A component directly increases the computational difficulty because the values involved are large, so in the above embodiments the normalized value obtained by normalizing the A component is used as the difference calculation weight.
Optionally, as shown in fig. 4, the method further includes:
step S100: converting an original RGBA image into a YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
wherein the compressed data amount of the YUVA image is smaller than the compressed data amount of the RGBA image.
In the embodiment, the YUVA image is converted from an RGBA image, that is, the RGBA image is an original image; and the YUVA image is a converted image.
When the RGBA image is converted into the YUVA image, the R component, the G component, and the B component in the RGBA image are converted into the Y component, the U component, and the V component of the YUVA image, and the A component in the RGBA image is directly assigned to the A component in the YUVA image. The conversion of the R, G, and B components into the Y, U, and V components of the YUVA image can follow the related-art conversion of an RGB image into a YUV image.
When the image compression is carried out after the RGBA image has been converted into the YUVA image, the resulting data size is smaller than the compressed data obtained by compressing the RGBA image directly; the compression ratio is larger, and the data size of the compressed file is reduced further.
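For illustration, a sketch of such a conversion is given below; it uses BT.601 full-range coefficients as one possible related-art choice, and omits the chroma subsampling and blank-component padding described earlier:

    import numpy as np

    def rgba_to_yuva(rgba):
        # rgba: (H, W, 4) uint8 array; returns full-resolution Y, U, V, A planes.
        r, g, b, a = [rgba[..., i].astype(np.float32) for i in range(4)]
        y =  0.299 * r + 0.587 * g + 0.114 * b
        u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
        v =  0.500 * r - 0.419 * g - 0.081 * b + 128.0
        clip = lambda p: np.clip(p, 0.0, 255.0).astype(np.uint8)
        return clip(y), clip(u), clip(v), a.astype(np.uint8)  # A is copied through unchanged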
In this embodiment, the YUVA image is converted from the RGBA image. To improve processing efficiency, while the RGBA image is being converted into the YUVA image, the difference calculation is performed directly on the RGBA image, and the key images and normal images among the YUVA images are then determined according to the correspondence between the RGBA images and the YUVA images. In this way, the conversion of the image type and the determination of key and common images proceed in parallel, reducing the compression delay. For example, a first thread converts RGBA images into YUVA images while a second thread distinguishes key images from common images; the first thread and the second thread may each comprise one or more threads, but they are different threads, so parallel computing speeds up generation of the compressed file, as sketched below.
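One possible realization of this two-thread arrangement (illustration only): classify_rgba_frames stands for a hypothetical helper that performs the difference calculation directly on the RGBA images, and rgba_to_yuva is the conversion sketch shown earlier.

    from concurrent.futures import ThreadPoolExecutor

    def prepare_frames(rgba_frames, classify_rgba_frames):
        # Run format conversion and key/common classification concurrently.
        with ThreadPoolExecutor(max_workers=2) as pool:
            convert_job  = pool.submit(lambda: [rgba_to_yuva(f) for f in rgba_frames])
            classify_job = pool.submit(classify_rgba_frames, rgba_frames)
            yuva_frames = convert_job.result()
            key_idx, common_idx = classify_job.result()
        return yuva_frames, key_idx, common_idx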
Therefore, in some embodiments, the step S111 may include: extracting one or more of the R component, the G component, and the B component in the RGBA image;
the step S112 may include: determining a difference calculation weight according to the component A;
the step S113 may include: obtaining an nth calculation value of the nth RGBA image and an nth +1 calculation value of the n +1 RGBA images based on the extracted color difference components and the calculation weight, wherein n is a positive integer;
the step S114 may include: carrying out difference calculation on the nth calculation value and the n +1 th calculation value;
the step S115 may include: and determining a key image and the common image according to the difference calculation result.
In this embodiment, specifically, whether one or more of the R component, the G component, and the B component are selected may be determined according to the color difference. If the color difference is small, the difference calculation can be performed with only one color component, thereby reducing the amount of calculation. In the present embodiment, the difference calculation combines the A component with that one color component, so that not only the color component but also the A component is taken into consideration.
Performing the difference calculation in step S114 may include: subtracting the n-th calculated value and the (n+1)-th calculated value to obtain a difference calculation result, and comparing the result of the difference calculation with the preset range to determine the key image and the normal image.
Further, the plurality of YUVA images may be image frames of a sequential frame animation, or the RGBA images may be image frames of a sequential frame animation. If the images belong to a sequence frame animation, the YUVA images have strong content relevance, so the compression technology described above can greatly compress the data volume and reduce the storage resources and/or bandwidth resources it occupies.
In addition, in the embodiment of the present invention, the YUVA image and the RGBA image may be at least partially transparent images, so that the data amount may be compressed to the maximum extent by using the above compression method, and unnecessary waste of storage resources and transmission resources may be reduced.
Optionally, the step S120 may include:
and performing video compression on the YUVA image by adopting a VP9 coding mode according to the key image and the common image.
In this embodiment, VP9 encoding is adopted to perform video compression on the YUVA images, so that a WebM compressed video file is obtained. This strikes a good balance between the data size of the file and the time required for compression, increasing the encoding rate while reducing the data size as much as possible.
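As an illustration of this step (not prescribed by the patent), a frame sequence could be encoded with an ffmpeg build that includes the libvpx-vp9 encoder; the frame pattern and frame rate below are example values:

    import subprocess

    def compress_to_webm(frame_pattern, output_path, fps=30):
        # frame_pattern: e.g. "frames/frame_%04d.png" (illustrative path).
        # yuva420p keeps the transparency A component in the WebM output.
        cmd = [
            "ffmpeg", "-y",
            "-framerate", str(fps),
            "-i", frame_pattern,
            "-c:v", "libvpx-vp9",
            "-pix_fmt", "yuva420p",
            output_path,
        ]
        subprocess.run(cmd, check=True)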
As shown in fig. 5, the present embodiment provides an image information compressing apparatus, which can be applied to a compression end, and may include:
a determining module 110, configured to determine a key image in YUVA images to be compressed and common images other than the key image; wherein each YUVA image comprises: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component;
a first compression module 120, configured to perform video compression on the YUVA image according to the key image and the normal image, and obtain a compressed file of the YUVA image.
The determining module 110 and the first compressing module 120 may be program modules, and when executed by a processor, the program modules may implement one or more of the implementations described above to provide the image information processing method.
Optionally, the determining module 110 includes:
a first determining unit, configured to determine a calculation component, where the calculation component is: one or more of the Y component, the V component, and the U component, or the calculation component is: one or more of the Y component, the V component, and the conversion component of the U component;
the second determining unit is used for determining a difference calculation weight according to the component A;
a first calculating unit, configured to obtain an nth calculated value of the nth YUVA image and an n +1 th calculated value of the n +1 th YUVA image based on the extracted color difference components and the calculated weights, where n is a positive integer;
a second calculation unit configured to perform difference calculation on the nth calculation value and the (n + 1) th calculation value;
and the third determining unit is used for determining the key image and the common image according to the difference calculation result.
In this embodiment, combining the A component affects the similarity calculation for pixels with different color values, so the difference calculation weight needs to be determined according to the A component.
In other implementations, the third determining unit is specifically configured to determine that the n +1 th YUVA image is the key image if a difference between the n +1 th calculated value and the nth calculated value is outside a preset range; and/or, if the difference calculation result of the n +1 th calculated value compared with the n-th calculated value is within the preset range, the n +1 th YUVA image is the normal image.
Optionally, the second determining unit is specifically configured to perform normalization processing on the component a to obtain the difference calculation weight.
Further, the apparatus further comprises:
a first conversion module, configured to convert an original RGBA image into the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component;
wherein the compressed data amount of the YUVA image is smaller than the compressed data amount of the RGBA image.
Optionally, the determining module 110 is specifically configured to extract a color component in the RGBA image, where the color component includes: the R component, the G component, and the B component; determining a difference calculation weight according to the component A; obtaining an nth calculation value of the nth RGBA image and an nth +1 calculation value of the n +1 RGBA images based on the extracted color difference components and the calculation weight, wherein n is a positive integer; carrying out difference calculation on the nth calculation value and the n +1 th calculation value; and determining a key image and the common image according to the difference calculation result.
Further, optionally, the plurality of YUVA images are image frames of a sequential frame animation.
In other embodiments, the first compression module is specifically configured to perform video compression on the YUVA image by using a VP9 coding scheme according to the key image and the normal image.
As shown in fig. 6, the present embodiment provides an image information processing method including:
step S210: receiving a compressed file obtained by video compressing YUVA images based on a key image and common images, wherein each YUVA image comprises: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 of the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 of the data amount of the A component;
step S220: restoring the YUVA image carried in the compressed file;
step S230: and playing the sequence frame animation according to the playing sequence of the YUVA image.
The method provided by this embodiment can be applied to a decompression end, which can be a receiving end of the compressed file, but is not limited to the receiving end, and in a specific case, the decompression end can also be a compression end of the compressed file.
The compressed file of the present embodiment is a compressed YUVA image, and the YUVA image here can be referred to the foregoing embodiments, and will not be repeated here.
In addition, the compressed file received in the present embodiment was produced by video compression based on key images and common images. Therefore, the received data volume is small, and if the receiving end stores the data, the storage space occupied is also small.
In this embodiment, YUVA images carried in a compressed file are restored, and meanwhile, a playing order of each YUVA image can be obtained based on information such as a header of a video file, and the YUVA images or converted images of the YUVA images can be directly played based on the playing order, so that a playing effect of a sequential frame animation is achieved.
Optionally, the method further comprises:
converting the YUVA image to an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component.
In this embodiment, the original image of the sequence frame animation may be an RGBA image, and in order to reduce the amount of data transmitted, the RGBA image is converted into a YUVA image, and in order to implement further restoration, the method may further include: converting the YUVA image into an RGBA image.
Similarly, the Y component, the V component, and the U component in the YUVA image are converted into the R component, the G component, and the B component, and the a component in the YUVA image is converted into the a component in the RGBA image. Thus, the YUVA image and the RGBA image correspond one-to-one.
Optionally, the step S220 may include:
before playing the sequence frame animation, restoring a YUVA image in the compressed file;
storing image data, wherein the image data is the YUVA image or the converted RGBA image of the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
the step S230 may include:
reading the image data when the sequence frame animation is played;
and sequentially playing the sequence frame animation according to the playing sequence of the YUVA images.
In this embodiment, the YUVA images are restored before the sequence frame animation is played, so there is no need to restore them at playback time, which could overload a processor such as a CPU. In particular, when compressed files containing several sequence frame animations need to be decompressed and played, restoring the YUVA images in advance greatly reduces the load on the CPU that plays the animations, reduces stuttering during playback, and improves the playing effect.
In this embodiment, the image data may be image data of the YUVA image, and may also be image data of the RGBA image.
In this embodiment, the image data may be stored on the hard disk at the decompression end, for example in a predetermined space of the hard disk at the decompression end; when the sequence frame animation is played in step S230, the image data is read from the predetermined space and played directly, so the image does not need to be restored while it is being played.
In other embodiments, if the memory space of the decompression end is large, or the data amount of the image data is smaller than a specific value, the image data may be stored in memory, so that the data is read directly from memory during playing, further reducing the playing delay and improving the playing effect.
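As a rough, non-limiting sketch of the "restore before playing" strategy described above, the following pre-decodes all frames and chooses between memory and a predetermined space on the hard disk; the threshold value, the file naming, and the function name are hypothetical.

```python
import os
import numpy as np

MEMORY_LIMIT_BYTES = 64 * 1024 * 1024  # illustrative threshold, not from the embodiment

def predecode_frames(decoded_frames, cache_dir):
    """Restore all frames before playback starts and decide where to keep them.

    decoded_frames: list of NumPy arrays already restored from the compressed file.
    Returns ("memory", frames) or ("disk", list of file paths)."""
    total_bytes = sum(f.nbytes for f in decoded_frames)
    if total_bytes <= MEMORY_LIMIT_BYTES:
        # Small animations stay in memory so playback reads have no disk latency.
        return "memory", decoded_frames

    os.makedirs(cache_dir, exist_ok=True)
    paths = []
    for i, frame in enumerate(decoded_frames):
        path = os.path.join(cache_dir, f"frame_{i:04d}.npy")
        np.save(path, frame)  # pre-stored in a predetermined space on the hard disk
        paths.append(path)
    return "disk", paths
```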
Optionally, the method further includes:
deleting the first blank component and the second blank component in the YUVA image prior to storing the image data.
In this embodiment, the first blank component and the second blank component are not actually used for displaying the image and have no data value, for example, both are "0"; therefore the first blank component and the second blank component can be deleted, reducing the storage space consumed when the image data is stored.
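A minimal sketch of deleting the blank components before storage is given below, assuming the planes of a YUVA image are held in a dictionary keyed by component name; the key names are illustrative only.

```python
def strip_blank_components(yuva_planes):
    """Drop the all-zero placeholder planes before storing the image data.

    yuva_planes: dict with keys such as "Y", "U", "V", "A", "blank1", "blank2".
    Only the planes that carry data are kept, which shrinks the stored size."""
    return {name: plane for name, plane in yuva_planes.items()
            if name not in ("blank1", "blank2")}
```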
In some embodiments, the method further comprises:
if the current load rate is less than the preset load rate, compressing the YUVA image or the image data without the first blank component and the second blank component to obtain compressed data;
the storing the image data includes:
and storing the compressed data.
In this way, when the decompression end is comparatively idle, the image data of the YUVA image, or of the YUVA image from which the first blank component and the second blank component have been removed, may be compressed to further reduce the storage resources consumed during storage.
Optionally, if the current load rate is less than a preset load rate, compressing the YUVA image or the image data from which the first blank component and the second blank component are removed to obtain compressed data, including:
and if the current load rate is less than the preset load rate, compressing the YUVA image or the image data without the first blank component or the second blank component by using a JPEG (Joint Photographic Experts Group) compression technique.
In this embodiment, compressing the YUVA image or the image data from which the first blank component and the second blank component are removed by using JPEG compression can achieve a higher compression rate, reducing the data amount as much as possible.
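The following sketch illustrates such load-gated JPEG compression, assuming the data to be stored has already been packed into a single-channel uint8 plane; the use of psutil for the load rate, Pillow for JPEG encoding, the 50% threshold, and the quality setting are assumptions of this sketch, not requirements of the embodiment.

```python
import io
import numpy as np
import psutil                 # illustrative way to read the current CPU load
from PIL import Image

LOAD_THRESHOLD = 50.0         # preset load rate in percent, an assumed value

def maybe_jpeg_compress(gray_plane: np.ndarray):
    """JPEG-compress a single-channel uint8 image (e.g. YUVA data with the blank
    components removed, packed as one grayscale plane) only when the CPU is idle.

    Returns JPEG bytes, or None if the machine is currently too busy."""
    if psutil.cpu_percent(interval=0.1) >= LOAD_THRESHOLD:
        return None
    buf = io.BytesIO()
    Image.fromarray(gray_plane, mode="L").save(buf, format="JPEG", quality=85)
    return buf.getvalue()
```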
In some implementations, the step S230 may include:
determining whether there is compressed data of the YUVA image;
and if the compressed data exist, decompressing the compressed data to restore the YUVA image.
In some implementations, the step S230 may further include:
and if the compressed data does not exist, reading the YUVA image or restoring the image data from which the first blank component and the second blank component are removed.
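A possible read path matching this logic is sketched below; the file naming and cache layout are hypothetical.

```python
import os
import numpy as np
from PIL import Image

def load_frame(frame_id: str, cache_dir: str) -> np.ndarray:
    """Playback read path: prefer the JPEG-compressed copy if one was produced,
    otherwise fall back to the stored raw data (blank components removed)."""
    jpeg_path = os.path.join(cache_dir, f"{frame_id}.jpg")
    raw_path = os.path.join(cache_dir, f"{frame_id}.npy")
    if os.path.exists(jpeg_path):
        # Compressed data exists: decompress it to restore the image.
        return np.asarray(Image.open(jpeg_path))
    # No compressed data: read the stored raw image data directly.
    return np.load(raw_path)
```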
As shown in fig. 7, the present embodiment provides an image information processing apparatus including:
a receiving module 210, configured to receive a compressed file obtained by video compressing a YUVA image based on a key image and a common image, where the YUVA image includes: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component;
the restoring module 220 is configured to restore the YUVA image carried in the compressed file;
and the playing module 230 is configured to play the sequence frame animation according to the playing order of the YUVA image.
The image information processing apparatus is applicable to a decompression side, for example, a receiving side of a compressed file, or the like.
The receiving module 210, the restoring module 220, and the playing module 230 may all correspond to program modules, and the program modules may receive the compressed file, restore the YUVA image, and play a sequence frame animation after being executed by a processor.
Optionally, the apparatus further comprises:
a second conversion module configured to convert the YUVA image into an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component.
Optionally, the restoring module 220 is specifically configured to restore the YUVA image in the compressed file before playing the sequence frame animation; storing image data, wherein the image data is the YUVA image or the converted RGBA image of the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
the playing module 230 is specifically configured to read the image data when the sequence frame animation is played; and sequentially playing the sequence frame animation according to the playing sequence of the YUVA images.
Optionally, the apparatus further comprises:
a deletion module to delete the first and second blank components in the YUVA image prior to storing the image data.
Optionally, the first compression module is specifically configured to compress the YUVA image or the image data from which the first blank component and the second blank component are removed to obtain compressed data if the current load rate is less than a preset load rate;
the storage module is specifically configured to store the compressed data.
Optionally, the second compression module is specifically configured to compress the YUVA image or the image data from which the first blank component or the second blank component is removed by using a JPEG compression technique if the current load rate is smaller than the preset load rate.
Optionally, the playing module 230 is specifically configured to determine whether there is compressed data of the YUVA image; if the compressed data exist, decompressing the compressed data to restore the YUVA image; or, if there is no compressed data, reading the YUVA image or restoring the image data from which the first blank component and the second blank component are removed.
As shown in fig. 11, the present embodiment provides an electronic apparatus including:
a memory;
and a processor, connected with the memory and configured to implement, by executing the computer-executable instructions stored on the memory, the image information processing method provided by one or more of the foregoing technical solutions, for example, one or more of the information processing methods shown in fig. 1, fig. 3 and fig. 4.
The memory can be various types of memories, such as random access memory, read only memory, flash memory, and the like. The memory may be used for information storage, e.g., storing computer-executable instructions, etc. The computer-executable instructions may be various program instructions, such as object program instructions and/or source program instructions, and the like.
The processor may be various types of processors, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor.
The processor may be connected to the memory via a bus. The bus may be an integrated circuit bus or the like.
In some embodiments, the terminal device may further include: a communication interface, which may include: a network interface, e.g., a local area network interface, a transceiver antenna, etc. The communication interface is also connected with the processor and can be used for information transceiving.
In some embodiments, the terminal device further comprises a human-computer interaction interface, for example, the human-computer interaction interface may comprise various input and output devices, such as a keyboard, a touch screen, and the like.
The present embodiments provide a computer storage medium having stored thereon computer-executable instructions; after being executed, the computer-executable instructions can implement one or more image information processing methods provided by the technical solutions, for example, one or more of the methods shown in fig. 1, fig. 3, fig. 4, fig. 6, and fig. 8 to fig. 10.
The computer storage medium may be any of various recording media having a recording function, for example, various storage media such as a CD, a floppy disk, a hard disk, a magnetic tape, an optical disc, a USB disk, or a removable hard disk. Optionally, the computer storage medium may be a non-transitory storage medium readable by a processor, so that after the computer-executable instructions stored in the computer storage medium are acquired and executed by the processor, the information processing method provided by any one of the foregoing technical solutions can be implemented, for example, the information processing method applied to the terminal device or the information processing method applied to the application server.
The present embodiments also provide a computer program product comprising computer executable instructions; the computer-executable instructions, when executed, enable one or more of the image information processing methods provided by the foregoing aspects, for example, one or more of the methods shown in fig. 1, 3, 4, 6, and 8-10.
The computer program product includes a computer program tangibly embodied on a computer storage medium; the computer program includes program code for performing the methods illustrated in the flowcharts, and the program code may include instructions corresponding to the steps of the methods provided by the embodiments of the present invention. The program product may be various applications, software development kits, or the like.
Combining any of the above embodiments, several specific examples are provided below:
example 1:
as shown in fig. 8, the present example provides an image information processing method including: decompression, image conversion, coding, buffering, compression and the like. In some application scenarios, the decompression is not necessary.
The decompressing may include:
reading a compressed file, for example, a compressed file compressed by using a video compression technique in the foregoing embodiment;
decompressing the read compressed file to obtain a plurality of images, shown in fig. 8 as image 1, image 2, image 3, and the like.
If the file needs to be subjected to video compression, the subsequent steps of encoding, caching, compression, and the like are carried out. In some embodiments, the image conversion may be an unnecessary step; for example, if the original image is already a YUVA image, no image conversion is needed.
The image conversion, encoding and buffering and compression may include:
reading a file, where the read file may be an image file, specifically an RGBA image as shown in fig. 8;
copying the RGBA image to perform difference calculation at pixel granularity, for example, computing differences one by one between pixels at corresponding positions in two images (or computing differences on a downsampled basis); if the difference is larger than a threshold (Y), the image is regarded as a key image, otherwise (N) it is a non-key image (see the sketch after this list);
converting the RGBA image into a YUVA image, whose components may be as shown in fig. 8, where "0" denotes a blank component; the aforementioned first blank component and second blank component may both be referred to collectively as blank components.
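The sketch below illustrates the two operations above: a pixel-granularity difference check for deciding whether an image is a key image, and the RGBA-to-YUVA conversion with zero-filled blank planes whose sizes match the component relations described earlier (the A plane equals the Y plane in size, and each blank plane pads a chroma plane up to half the data amount of the A plane). The threshold value, full-range BT.601 coefficients, 4:2:0 subsampling, and function names are assumptions of this sketch.

```python
import numpy as np

DIFF_THRESHOLD = 12.0   # illustrative threshold for the key-image decision

def is_key_image(prev_rgba: np.ndarray, cur_rgba: np.ndarray) -> bool:
    """Pixel-granularity difference between two RGBA frames: if the mean
    per-pixel difference exceeds the threshold, treat the frame as a key image."""
    diff = np.abs(cur_rgba.astype(np.int16) - prev_rgba.astype(np.int16))
    return float(diff.mean()) > DIFF_THRESHOLD

def rgba_to_yuva_planes(rgba: np.ndarray):
    """Convert an HxWx4 uint8 RGBA image into planar YUVA data plus two
    all-zero blank planes matching the stated component sizes."""
    r, g, b, a = [rgba[..., i].astype(np.float32) for i in range(4)]

    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0

    def subsample(p):
        # 4:2:0 subsampling of a chroma plane by simple 2x2 averaging.
        h, w = p.shape
        return p[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    u_s, v_s = subsample(u), subsample(v)
    return {
        "Y": np.clip(y, 0, 255).astype(np.uint8),
        "U": np.clip(u_s, 0, 255).astype(np.uint8),
        "V": np.clip(v_s, 0, 255).astype(np.uint8),
        "A": a.astype(np.uint8),                        # same size as the Y plane
        "blank1": np.zeros_like(u_s, dtype=np.uint8),   # pads U up to 1/2 of A
        "blank2": np.zeros_like(v_s, dtype=np.uint8),   # pads V up to 1/2 of A
    }
```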
In the encoding process, encoding is initialized first. Image frames are then encoded, that is, video coding is performed according to the key images and the common images, where a key image may correspond to a key frame in the video and a common image may correspond to a common frame in the video.
The video header is encoded after the encoding initialization, and the video header may include video parameters, such as the number of image frames included in the video, the width and height of the image frames, the video format, and the frame rate in Frames Per Second (FPS).
When an image frame is encoded, a frame header and buffered data are formed. The frame header may include parameters of the image frame, for example, indicating whether the image frame is a key frame corresponding to a key image or a common frame corresponding to a common image. The buffered data may include image data, for example, the image data of the YUVA image, which may include the component values of the Y component, the A component, the U component, the V component, the first blank component, and the second blank component.
When the encoding of all the images is completed, the end of the video file is encoded, which may also include: a frame header plus video data.
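For illustration, the video header and frame header described above might be modeled as the following records; all field names are hypothetical and only reflect the parameters mentioned in this example.

```python
from dataclasses import dataclass

@dataclass
class VideoHeader:
    """Parameters written once at the start of the encoded file (illustrative fields)."""
    frame_count: int      # number of image frames in the video
    width: int            # width of the image frames
    height: int           # height of the image frames
    pixel_format: str     # video format, e.g. "YUVA420" (assumed label)
    fps: float            # frame rate in frames per second

@dataclass
class FrameHeader:
    """Per-frame parameters preceding each frame's buffered data (illustrative fields)."""
    index: int
    is_key_frame: bool    # key frame <-> key image, common frame <-> common image
    data_size: int        # size in bytes of the buffered component data that follows
```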
Finally, video compression is performed. The compression process may include: compression initialization, compression, and writing the compressed file, for example writing the compressed file to a hard disk, thereby reducing the storage resources consumed. In some cases the process further includes: sending the compressed file. The compressed file can be carried in the installation package of an application program that uses the images and issued to a receiving end, or it can be stored in a server and issued based on a request from the application program in the terminal device. In other cases, the method may further include: periodically sending updated compressed files to the terminal device where the application program is located.
Use scenario: a plurality of videos need to be played simultaneously, and each face sequence frame animation needs a matching video to be decoded. Encoding: WebM encoding is used; its advantages are that the occupied space is small and the load on the server is reduced.
Decoding:
Mode 1: decode the video compressed file in real time.
Mode 2: store the raw image data on a hard disk; multiple threads decode multiple compressed files.
During idle periods, the CPU encodes the original image data into smaller files. For example, when the CPU utilization is below a certain threshold, the original image data is encoded, for example JPEG-compressed, saving storage resources.
Image coding: for coding images that include a transparency component, WebP coding, for example, adopts a YUV + alpha (A) mode; to balance coding time and file size, JPEG compression is used.
Example 2:
as shown in fig. 9, the present example provides an image information processing method, and the image information may include: image data of various images. The method may comprise:
reading the compressed file;
the compressed file is decompressed to obtain a decompressed file 1, a decompressed file 2 and a decompressed file 3, and the implementation is not limited to 3 decompressed files.
The caching operation may include: a video file header, a picture frame header, and buffer data (which may include the original picture data).
The encoding operation may include:
initializing codes;
encoding the image frame;
and judging whether the image is the last image or not, and if not, returning to the cache operation.
The image conversion may include:
obtaining a YUVA image by encoding the image frame;
removing blank components from the YUVA image to obtain a gray scale image;
the raw image data is stored, or, alternatively, it is JPEG-compressed and the JPEG-format compressed data is stored.
If the current CPU occupancy is low, the original image data is read from the hard disk and JPEG-compressed.
Example 3:
this example provides a playback schematic of a sequence frame animation, comprising:
requesting an image file sequence, where the sequence frame animation is formed by switching images in order; in some cases the images are therefore stored on the device in sequence form, forming the image file sequence;
reading an image file from a hard disk;
judging whether a JPEG image exists: if yes, reading the JPEG image file; if not, reading the original image file, where the original image file may be a YUVA image from which the blank components have been removed;
Converting the YUVA image into an RGBA image;
file mapping is then performed, for example mapping each YUVA image to its RGBA image, so that the playing order of the RGBA images is known.
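A minimal playback loop consistent with this example is sketched below; read_frame and display are hypothetical callbacks standing in for the storage read path and the actual rendering, and the default frame rate is arbitrary.

```python
import time

def play_sequence(read_frame, frame_count, fps=25.0, display=print):
    """Play a sequence frame animation in its stored order.

    read_frame(i) returns the i-th restored RGBA frame (from memory, disk,
    or a decompressed JPEG copy); display() is a placeholder for rendering."""
    interval = 1.0 / fps
    for i in range(frame_count):
        frame = read_frame(i)     # frames were restored before playback started
        display(frame)            # placeholder for the actual rendering call
        time.sleep(interval)
```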
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (32)

1. An image information processing method characterized by comprising:
determining a key image and a common image except the key image in a YUVA image to be compressed; wherein the YUVA image comprises: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the a component; the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the a component;
performing video compression on the YUVA image according to a key image and the common image to obtain a compressed file of the YUVA image;
the YUVA images to be compressed are a plurality of independent YUVA images with content relevance, and the content relevance is that the YUVA images played back and forth have similarity; the multiple independent YUVA images are divided into multiple groups of images to be compressed according to the similarity, and when the images are compressed, the same part of data compression is carried out on each group of YUVA images;
the determining a key image in a YUVA image to be compressed and a normal image other than the key image includes:
estimating the difference degree according to two images in the YUVA image to be compressed to obtain the transparency difference between the two images;
if the transparency difference is larger than a transparency difference threshold value, taking at least one of the two images as the key image;
if the transparency difference is smaller than a transparency difference threshold, acquiring the color difference between the two images; determining the number of calculation components participating in difference calculation based on the color difference; selecting calculation components according to the number of the calculation components to participate in differential calculation; and determining the key image and the common image according to the difference calculation result.
2. The method of claim 1,
the determining of the key image and the normal image except the key image in the YUVA image to be compressed includes:
determining a calculation component, wherein the calculation component is: one or more of the Y component, the V component, and the U component, or the calculation component is: one or more of the Y component, the V component, and the conversion component of the U component;
determining a difference calculation weight according to the component A;
obtaining an nth calculation value of the nth YUVA image and an n +1 th calculation value of the n +1 th YUVA image based on the extracted color difference components and the calculation weight, wherein n is a positive integer;
carrying out difference calculation on the nth calculation value and the n +1 th calculation value;
and determining a key image and the common image according to the difference calculation result.
3. The method of claim 2,
determining the key image and the common image according to the difference calculation result comprises:
if the difference calculation result of the (n + 1)th calculated value compared with the nth calculated value is outside a preset range, the (n + 1)th YUVA image is the key image;
and/or,
if the difference calculation result of the (n + 1)th calculated value compared with the nth calculated value is within the preset range, the (n + 1)th YUVA image is the common image.
4. The method of claim 2,
the determining a difference calculation weight according to the component a includes:
and carrying out normalization processing on the component A to obtain the weight value calculated by the difference.
5. The method of claim 1, further comprising:
converting an original RGBA image into the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
wherein the compressed data amount of the YUVA image is smaller than the compressed data amount of the RGBA image.
6. The method according to claim 5, wherein the determining a key picture and a normal picture other than the key picture in the YUVA picture to be compressed comprises:
extracting a color component in the RGBA image, wherein the color component comprises: the R component, the G component, and the B component;
determining a difference calculation weight according to the component A;
obtaining an nth calculation value of the nth RGBA image and an nth +1 calculation value of n +1 RGBA images based on the extracted color difference components and the calculation weight, wherein n is a positive integer;
carrying out difference calculation on the nth calculation value and the n +1 th calculation value;
and determining a key image and the common image according to the difference calculation result.
7. The method according to any one of claims 1 to 6,
and the YUVA images are image frames of a sequence frame animation.
8. The method according to any one of claims 1 to 6,
the video compression of the YUVA image and the obtaining of the compressed file of the YUVA image according to the key image and the normal image include:
and performing video compression on the YUVA image by adopting a VP9 coding mode according to the key image and the common image.
9. An image information processing method characterized by comprising:
receiving a compressed file obtained by video compressing a YUVA image based on a key image and a common image, wherein the YUVA image comprises: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the a component; the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the a component;
restoring the YUVA image carried in the compressed file;
playing the sequence frame animation according to the playing sequence of the YUVA image;
the YUVA images carried in the compressed file are a plurality of independent YUVA images with content relevance, and the content relevance is realized by the similarity between the front and back played YUVA images; compressing the multiple independent YUVA images into multiple groups of compressed images according to the similarity, and performing data compression on the same part of each group of YUVA images during compression;
the key image and the common image are subjected to prediction determination of the difference degree according to two images in a YUVA image to be compressed;
the key image and the common image are subjected to prediction determination of the difference degree according to two images in the YUVA image to be compressed, and the method is realized by executing the following operations:
estimating the difference degree according to two images in the YUVA image to be compressed to obtain the transparency difference between the two images;
if the transparency difference is larger than a transparency difference threshold value, taking at least one of the two images as the key image;
if the transparency difference is smaller than a transparency difference threshold, acquiring the color difference between the two images; determining the number of calculation components participating in difference calculation based on the color difference; selecting calculation components according to the number of the calculation components to participate in differential calculation; and determining the key image and the common image according to the difference calculation result.
10. The method of claim 9, further comprising:
converting the YUVA image to an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component.
11. The method according to claim 9 or 10,
the restoring the YUVA image carried in the compressed file comprises the following steps:
restoring a YUVA image in the compressed file before playing the sequence frame animation;
storing image data, wherein the image data is the YUVA image or the converted RGBA image of the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
the playing of the sequence frame animation according to the playing sequence of the YUVA image comprises the following steps:
reading the image data when the sequence frame animation is played;
and sequentially playing the sequence frame animation according to the playing sequence of the YUVA images.
12. The method of claim 10, further comprising:
deleting the first and second blank components in the YUVA image prior to storing the image data.
13. The method of claim 11, further comprising:
if the current load rate is less than the preset load rate, compressing the YUVA image or the image data without the first blank component and the second blank component to obtain compressed data;
the storing the image data includes:
and storing the compressed data.
14. The method of claim 13,
if the current load rate is less than a preset load rate, compressing the YUVA image or the image data from which the first blank component and the second blank component are removed to obtain compressed data, including:
and if the current load rate is less than the preset load rate, compressing the YUVA image or the image data without the first blank component or the second blank component by using a JPEG (joint photographic experts group) compression technology.
15. The method of claim 11, wherein playing the sequence frame animation in the order in which the YUVA images are played comprises:
determining whether there is compressed data of the YUVA image;
if the compressed data exist, decompressing the compressed data to restore the YUVA image;
or,
and if the compressed data does not exist, reading the YUVA image or restoring the image data from which the first blank component and the second blank component are removed.
16. An image information processing apparatus characterized by comprising:
the device comprises a determining module, a compressing module and a compressing module, wherein the determining module is used for determining a key image in a YUVA image to be compressed and a common image except the key image; wherein the YUVA image comprises: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the a component; the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the a component; the YUVA images to be compressed are a plurality of independent YUVA images with content relevance, and the content relevance is that the YUVA images played back and forth have similarity; the multiple independent YUVA images are divided into multiple groups of images to be compressed according to the similarity, and when the images are compressed, the same part of data compression is carried out on each group of YUVA images;
the first compression module is used for carrying out video compression on the YUVA image according to a key image and the common image and obtaining a compressed file of the YUVA image;
the determining module is further configured to perform pre-estimation of a difference degree according to two images in the YUVA image to be compressed, so as to obtain a transparency difference between the two images;
if the transparency difference is larger than a transparency difference threshold value, taking at least one of the two images as the key image;
if the transparency difference is smaller than a transparency difference threshold, acquiring the color difference between the two images; determining the number of calculation components participating in difference calculation based on the color difference; selecting calculation components according to the number of the calculation components to participate in differential calculation; and determining the key image and the common image according to the difference calculation result.
17. The apparatus of claim 16,
the determining module includes:
a first determining unit, configured to determine a calculation component, where the calculation component is: one or more of the Y component, the V component, and the U component, or the calculation component is: one or more of the Y component, the V component, and the conversion component of the U component;
the second determining unit is used for determining a difference calculation weight according to the component A;
a first calculating unit, configured to obtain an nth calculated value of the nth YUVA image and an n +1 th calculated value of the n +1 th YUVA image based on the extracted color difference components and the calculated weights, where n is a positive integer;
a second calculation unit configured to perform difference calculation on the nth calculation value and the (n + 1) th calculation value;
and the third determining unit is used for determining the key image and the common image according to the difference calculation result.
18. The apparatus of claim 17,
the third determining unit is specifically configured to determine that the n +1 th YUVA image is the key image if a difference calculation result of the n +1 th calculated value compared to the nth calculated value is outside a preset range; and/or, if the difference calculation result of the n +1 th calculated value compared with the n-th calculated value is within the preset range, the n +1 th YUVA image is the normal image.
19. The apparatus of claim 17,
the second determining unit is specifically configured to perform normalization processing on the component a to obtain a difference calculation weight.
20. The apparatus of claim 16, further comprising:
a first conversion module configured to convert an original RGBA image into the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component;
wherein the compressed data amount of the YUVA image is smaller than the compressed data amount of the RGBA image.
21. The apparatus of claim 20, wherein the determining module is specifically configured to extract a color component in the RGBA image, and wherein the color component comprises: the R component, the G component, and the B component; determining a difference calculation weight according to the component A; obtaining an nth calculation value of the nth RGBA image and an nth +1 calculation value of n +1 RGBA images based on the extracted color difference components and the calculation weight, wherein n is a positive integer; carrying out difference calculation on the nth calculation value and the n +1 th calculation value; and determining a key image and the common image according to the difference calculation result.
22. The apparatus of any one of claims 16 to 21,
and the YUVA images are image frames of a sequence frame animation.
23. The apparatus of any one of claims 16 to 21,
the first compression module is specifically configured to perform video compression on the YUVA image by using a VP9 encoding method according to the key image and the normal image.
24. An image information processing apparatus characterized by comprising:
a receiving module, configured to receive a compressed file obtained by video compressing a YUVA image based on a key image and a common image, where the YUVA image includes: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component; the key image and the common image are subjected to prediction determination of the difference degree according to two images in a YUVA image to be compressed; the key image and the common image are subjected to prediction determination of the difference degree according to two images in the YUVA image to be compressed, and the method is realized by executing the following operations: estimating the difference degree according to two images in the YUVA image to be compressed to obtain the transparency difference between the two images; if the transparency difference is larger than a transparency difference threshold value, taking at least one of the two images as the key image; if the transparency difference is smaller than a transparency difference threshold, acquiring the color difference between the two images; determining the number of calculation components participating in difference calculation based on the color difference; selecting calculation components according to the number of the calculation components to participate in differential calculation; determining the key image and the common image according to the difference calculation result;
the restoration module is used for restoring the YUVA image carried in the compressed file;
the playing module is used for playing the sequence frame animation according to the playing sequence of the YUVA image;
the YUVA images carried in the compressed file are a plurality of independent YUVA images with content relevance, and the content relevance is realized by the similarity between the front and back played YUVA images; and compressing the plurality of independent YUVA images into a plurality of groups of compressed images according to the similarity, and performing data compression on the same part of each group of YUVA images during compression.
25. The apparatus of claim 24, further comprising:
a second conversion module configured to convert the YUVA image into an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component.
26. The apparatus of claim 24 or 25,
the restoring module is specifically configured to restore the YUVA image in the compressed file before playing the sequence frame animation; storing image data, wherein the image data is the YUVA image or the converted RGBA image of the YUVA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
the playing module is specifically used for reading the image data when the sequence frame animation is played; and sequentially playing the sequence frame animation according to the playing sequence of the YUVA images.
27. The apparatus of claim 24, further comprising:
a deletion module to delete the first and second blank components in the YUVA image prior to storing image data.
28. The apparatus of claim 26, further comprising:
a second compression module, configured to compress the YUVA image or the image data from which the first blank component and the second blank component are removed to obtain compressed data, if the current load rate is less than a preset load rate;
and the storage module is specifically used for storing the compressed data.
29. The apparatus of claim 28,
the second compression module is specifically configured to compress the YUVA image or the image data from which the first blank component or the second blank component is removed by using a JPEG compression technique if the current load rate is smaller than the preset load rate.
30. The apparatus according to claim 26, wherein the playback module is specifically configured to determine whether compressed data of the YUVA image is available; if the compressed data exist, decompressing the compressed data to restore the YUVA image; or, if there is no compressed data, reading the YUVA image or restoring the image data from which the first blank component and the second blank component are removed.
31. An electronic device, comprising:
a memory;
a processor coupled to the memory for enabling the method provided by any one of claims 1 to 8 or 10 to 15 by executing computer executable instructions located on the memory.
32. A computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed, enable the method provided by any one of claims 1 to 8 or 10 to 15 to be carried out.
CN201810559099.7A 2018-06-01 2018-06-01 Image information processing method and device, and storage medium Active CN108668169B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810559099.7A CN108668169B (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium
CN202111165241.8A CN113766319A (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810559099.7A CN108668169B (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111165241.8A Division CN113766319A (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Publications (2)

Publication Number Publication Date
CN108668169A CN108668169A (en) 2018-10-16
CN108668169B true CN108668169B (en) 2021-10-29

Family

ID=63775285

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201810559099.7A Active CN108668169B (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium
CN202111165241.8A Withdrawn CN113766319A (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111165241.8A Withdrawn CN113766319A (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Country Status (1)

Country Link
CN (2) CN108668169B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016921B (en) * 2021-10-22 2023-10-13 荣耀终端有限公司 Resource scheduling method, device and storage medium
CN115550661B (en) * 2022-11-25 2023-03-24 统信软件技术有限公司 Image compression method, restoration method, computing device and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1361630A (en) * 2002-01-25 2002-07-31 安凯(广州)软件技术有限公司 Cartoon compressing method for radio network and hand-held radio equipment
CN1781314A (en) * 2003-04-30 2006-05-31 诺基亚有限公司 Picture coding method
CN1902938A (en) * 2004-01-05 2007-01-24 皇家飞利浦电子股份有限公司 Processing method and device using scene change detection
CN101594537A (en) * 2009-06-04 2009-12-02 京北方科技股份有限公司 Massive image data compression method
CN101742317A (en) * 2009-12-31 2010-06-16 北京中科大洋科技发展股份有限公司 Video compressing and encoding method with alpha transparent channel
CN101820545A (en) * 2010-05-04 2010-09-01 北京数码视讯科技股份有限公司 Encoding method of macro block of video frame inserting area
CN105913096A (en) * 2016-06-29 2016-08-31 广西大学 Extracting method for disordered image key frame
CN106375759A (en) * 2016-08-31 2017-02-01 深圳超多维科技有限公司 Video image data coding method and device, and video image data decoding method and device
CN107005715A (en) * 2014-10-14 2017-08-01 诺基亚技术有限公司 Coding image sequences and the device of decoding, method and computer program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5240349B2 (en) * 2011-11-14 2013-07-17 カシオ計算機株式会社 Image composition apparatus and program
US20140355665A1 (en) * 2013-05-31 2014-12-04 Altera Corporation Adaptive Video Reference Frame Compression with Control Elements
CN104935832B (en) * 2015-03-31 2019-07-12 浙江工商大学 For the video keying method with depth information

Also Published As

Publication number Publication date
CN113766319A (en) 2021-12-07
CN108668169A (en) 2018-10-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant