CN108668170B - Image information processing method and device, and storage medium - Google Patents


Info

Publication number
CN108668170B
Authority
CN
China
Prior art keywords
component
image
images
data
frame animation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810559284.6A
Other languages
Chinese (zh)
Other versions
CN108668170A (en)
Inventor
荆锐
赵代平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201810559284.6A
Publication of CN108668170A
Application granted
Publication of CN108668170B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation

Abstract

Embodiments of the invention provide an image information processing method and apparatus, and a storage medium. The image information processing method includes: before playing a sequence frame animation, restoring the images in a compressed file that was compressed by a video compression technique; storing the images; and, when the sequence frame animation is played, reading the images and playing them one by one according to their playback order.

Description

Image information processing method and device, and storage medium
Technical Field
The present invention relates to the field of information technologies, and in particular, to an image information processing method and apparatus, and a storage medium.
Background
A sequence frame is an image sequence formed by a plurality of images arranged in order; sequence frame animation refers to a playback technique that plays each image in the sequence frame one by one, in order.
A file of a sequence frame animation generally includes many images. If these images are transmitted directly, the amount of data to be transmitted is large and considerable transmission bandwidth is occupied.
To reduce the amount of data, the images are compressed and then transmitted. After receiving the compressed file, the receiving end needs to decompress it before playing the sequence frame animation. In practice, it has been found that playing a sequence frame animation sometimes drives the utilization rate of a processor such as a Central Processing Unit (CPU) too high, and can even cause some threads to stall.
Disclosure of Invention
In view of the above, embodiments of the present invention are directed to a method and an apparatus for processing image information, and a storage medium.
The technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image information processing method, including:
restoring the images in the compressed file compressed by the video compression technology before playing the sequence frame animation;
storing the image;
and reading the images and sequentially playing the images according to the playing sequence of the images when the sequence frame animation is played.
Optionally, the method further comprises:
if the current load rate is smaller than the preset load rate, compressing the image to obtain compressed data;
the storing the image comprises:
and storing the compressed data.
Optionally, if the current load rate is smaller than a preset load rate, compressing the image to obtain compressed data includes:
and if the current load rate is less than the preset load rate, compressing the image by using a JPEG compression technology.
Optionally, the image is a YUVA image, wherein the YUVA image includes: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component.
Optionally, the storing the image comprises:
storing the YUVA image;
or, alternatively,
storing the Y component, U component, V component, and A component of the YUVA image.
Optionally, the method further comprises:
converting the YUVA image into an RGBA image, wherein the RGBA image includes: a red R component, a green G component, a blue B component, and a transparency A component;
the storing the image comprises:
storing the RGBA image;
or, alternatively,
storing a compressed file of the RGBA image.
In a second aspect, an embodiment of the present invention provides an image information processing apparatus, including:
the restoring module is used for restoring the images in the compressed file compressed by the video compression technology before the sequence frame animation is played;
a storage module for storing the image;
and the playing module is used for reading the images and playing the images in sequence according to the playing sequence of the images when the sequence frame animation is played.
Optionally, the apparatus further comprises:
the compression module is used for compressing the image to obtain compressed data if the current load rate is smaller than a preset load rate;
the storage module is specifically configured to store the compressed data.
Optionally, the compression module is specifically configured to compress the image by using a JPEG compression technique if the current load rate is smaller than the preset load rate.
Optionally, the image is a YUVA image, wherein the YUVA image includes: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component.
Optionally, the storage module is specifically configured to store the YUVA image; or, a Y component, a U component, a V component, and an a component of the YUVA image.
Optionally, the apparatus further comprises:
a conversion module configured to convert the YUVA image into an RGBA image, wherein the RGBA image includes: a red R component, a green G component, a blue B component, and a transparency A component;
the storage module is specifically configured to store the RGBA image; alternatively, a compressed file of the RGBA image is stored.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
a memory;
and the processor is connected with the memory and used for realizing the image information processing method provided by one or more of the technical schemes by executing the computer executable instructions on the memory.
In a fourth aspect, embodiments of the present invention provide a computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed, enable one or more of the foregoing image information processing methods.
In a fifth aspect, an embodiment of the present invention provides a computer program product, which includes computer-executable instructions; after being executed, the computer-executable instructions can implement the image information processing method provided by one or more of the technical solutions.
In the technical solutions provided by the embodiments of the present invention, after a compressed file in a video format is received, it is decompressed and the images in it are restored before the sequence frame animation is played. Because the file compressed by a video compression technique is decompressed in advance, the decompressed images can be stored in a specific storage area, and when the sequence frame animation is to be played, the images only need to be read from that area and played. This avoids the excessive load on a processor such as a CPU that would result from decompressing, restoring, and playing simultaneously just before the animation plays, and hence avoids the thread stalls caused by insufficient CPU resources, improving the smoothness and overall effect of sequence frame animation playback.
Drawings
Fig. 1 is a schematic flowchart of a first image information processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a second image information processing method according to an embodiment of the present invention;
FIG. 3 is an equivalent diagram of components of an image according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image information processing apparatus according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a third method for processing image information according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a fourth image information processing method according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a fifth image information processing method according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating a sixth image information processing method according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating a seventh image information processing method according to an embodiment of the present invention;
fig. 10 is a flowchart illustrating an eighth image information processing method according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail with reference to the drawings and the specific embodiments of the specification.
As shown in fig. 1, the present embodiment provides an image information processing method including:
step S110: restoring the images in the compressed file compressed by the video compression technology before playing the sequence frame animation;
step S120: storing the image;
step S130: and reading the images and sequentially playing the images according to the playing sequence of the images when the sequence frame animation is played.
The method provided by this embodiment can be applied to a decompression end, which may be the receiving end of the compressed file but is not limited to it; in particular cases, the decompression end can also be the compression end of the compressed file.
The compressed file can be generated by the decompression end or received from the compression end. The compression end can also be the sending end of the compressed file, and may be a server or a terminal device providing the image information. The server can be a cloud server or a server group deployed in the network. The terminal device can be any of various electronic devices, such as a mobile phone, a wearable device, a virtual reality device, or an augmented reality device.
In this embodiment, the compressed file is a file compressed by a video compression technique. A video compression technique includes at least inter-image compression across different images: by deleting as much as possible the data content shared between images, it reduces the data amount beyond what intra-image compression of single images can achieve, improving compression efficiency. On the other hand, a video compression technique may further include: intra-image compression of single images. A compressed file produced by a video compression technique therefore has a small data amount, occupies few storage resources, and consumes few transmission resources when transmitted. In this embodiment, the video compression technique may produce, for example, a WebM compressed file, though specific implementations are not limited to the WebM format.
In forming a compressed file in video compression, the method further comprises:
the method comprises the steps of carrying out grouping compression on each image in a compressed serial frame animation based on the similarity between the images, dividing the images into a key image and common images except the key image, and thus, one compressed file can comprise a plurality of compression groups, each compression group comprises a key image and common images with good similarity to the key image, and compressing the images by regarding all the images as a group through grouping compression, so that the data volume after compression can be further reduced based on the image similarity between the groups. The compressed file includes: one or more compression groups.
In this embodiment, the restoration of the images is completed before the sequence frame animation is played, so there is no need to wait for image restoration at playback time, which would overload a processor such as a CPU. In particular, when compressed files of several sequence frame animations need to be decompressed and played, restoring the YUVA images in advance greatly reduces the CPU load while the animations play, reduces stuttering during playback, and improves the playback effect.
Storing the image in step S120 may include: storing the raw data of the image, which may include the pixel value of each pixel in the image. The image may be one of various types, for example an RGB image, a YUV image, an RGBA image, or a YUVA image. An RGB image includes: a red R component, a green G component, and a blue B component. An RGBA image includes: a red R component, a green G component, a blue B component, and a transparency A component.
In this embodiment, the image data may be that of a YUVA image, or that of an RGBA image. A YUVA image includes: a luminance Y component, a first color difference U component, a second color difference V component, and a transparency A component. Alternatively, a YUVA image includes: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component. A YUV image includes: a luminance Y component, a first color difference U component, and a second color difference V component.
Here, each component represents the data value (also referred to as a component value) of one dimension of a pixel, and together these components constitute the pixel value. The step S120 may include: storing the component values corresponding to the components of each pixel.
In this embodiment, the image data may be stored on the hard disk of the decompression end, for example in a predetermined space of that hard disk. When the sequence frame animation is played in step S130, the image data is read from the predetermined space and played directly, with no need to restore images while playing.
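A minimal sketch of this restore-store-play flow (steps S110 to S130). `decode_video_file` is a hypothetical stand-in for the real video decompressor, and the storage is modeled as an in-process list rather than a hard-disk space:

```python
# Restore all frames before playback so playing never decodes.

class FrameStore:
    """Holds restored frames so playback only reads, never decodes."""

    def __init__(self):
        self._frames = []

    def restore(self, compressed_file, decode):
        # Steps S110/S120: decode every frame up front and store it.
        self._frames = list(decode(compressed_file))

    def play(self, render):
        # Step S130: read stored frames and render them in order.
        for frame in self._frames:
            render(frame)

def decode_video_file(path):
    """Hypothetical decoder; yields frames in playback order."""
    yield from ("frame0", "frame1", "frame2")
```

Usage: call `restore` once ahead of time (for example at application start), then `play` as often as needed with no decoding cost.

```python
store = FrameStore()
store.restore("anim.webm", decode_video_file)
shown = []
store.play(shown.append)
# shown now holds the frames in playback order
```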
Optionally, as shown in fig. 2, the method further includes:
step S111: if the current load rate is smaller than the preset load rate, compressing the image to obtain compressed data;
the step S120 may include the step S121: and storing the compressed data.
The step S130 may include the step S131: when the sequence frame animation is played, reading the compressed data, decompressing it to obtain the images, and playing the images according to their playback order.
To limit the growth in data size after restoration, the embodiment of the invention further includes, before storing the image: compressing the image when the load rate of a processor such as the CPU is low, for example below the preset load rate, to obtain compressed data. The compressed data stored in step S120 can be produced by intra-image compression of a single image, so that restoring it later requires little computation and incurs little delay, while intra-image compression still reduces the data amount well, reducing the storage resources consumed by storing the image. In step S120, the image may be stored on a hard disk; in other embodiments, step S120 may instead include: storing the image in memory. If the sequence frame animation is stored in memory and needs to be played, the data is read directly from memory, which is highly efficient.
In some implementations, the preset load rate may be, for example, 60% or 70% of the utilization rate of a processor such as the CPU; these values are only examples, and specific implementations are not limited to them.
In some embodiments, the method may include: if the tag information indicates that a sequence frame animation is a common sequence frame animation or a basic sequence frame animation, storing the compressed data or the restored data directly in memory. For example, image processing applications or software development tools often provide entertaining sequence frame animations; some of these are the most basic animations of the application or tool, or ones frequently used by users, so their data is read from the hard disk into memory when the application or tool starts. Alternatively, steps S110 to S120 are performed when the image processing application or software development tool starts, or immediately after it starts. In this case, the storage location of the image may be the hard disk, with storage in memory preferred.
Playing the sequence frame animation in step S130 may include:
detecting an operation instruction, wherein the operation instruction can comprise: a user instruction received from a human-machine interaction interface;
and reading the images from a hard disk and/or a memory space based on the operation instruction and sequentially playing the images, thereby realizing the playing of the sequence frame animation.
Optionally, if the current load rate is smaller than a preset load rate, compressing the image to obtain compressed data includes:
and if the current load rate is less than the preset load rate, compressing the image by using a JPEG compression technology.
In this embodiment, compression uses a JPEG compression technique, producing a JPEG image, which is characterized by small image-quality loss and a large compression ratio.
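A sketch of the load-gated compression decision described above. `current_cpu_load` and `jpeg_compress` are hypothetical placeholders supplied by the caller; only the gating logic reflects the method:

```python
# Compress the restored image only when the CPU is idle enough.

PRESET_LOAD_RATE = 0.6   # e.g. 60% CPU utilization (example value)

def maybe_compress(image_bytes, current_cpu_load, jpeg_compress):
    """Return (data, is_compressed).

    If the current load is below the preset load rate, the image is
    compressed (e.g. with a JPEG encoder) before storage; otherwise
    the raw image is stored as-is to avoid adding CPU work.
    """
    if current_cpu_load < PRESET_LOAD_RATE:
        return jpeg_compress(image_bytes), True
    return image_bytes, False
```

In a real implementation, `jpeg_compress` would be an actual JPEG encoder and `current_cpu_load` would come from the operating system's processor statistics.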
Optionally, as shown in fig. 3, the image is a YUVA image, wherein the YUVA image includes: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component. There are many possible renderings of an effect diagram of the components of a YUVA image; implementations are not limited to any particular one.
In this embodiment, the YUVA image is a new image format: in addition to the original Y component, U component, and V component, it includes an A component, a first blank component, and a second blank component.
In this embodiment, the Y component and the A component may each include the component values of W × H pixels, where W is the number of pixel columns in one YUVA image and H is the number of pixel rows in one YUVA image. For example, the Y component may include the luminance values of W × H pixels, and the A component may include the transparency values of W × H pixels.
In this embodiment, the U component and the V component each include W × H/4 chrominance values, i.e., the values of W/2 × H/2 pixels. Of course, the pixel rows corresponding to the U component and to the V component are different.
In image processing, the Y component and the A component each occupy one first channel, so the data amount corresponding to one first channel is the component values of W × H pixels.
The U component and the V component may be regarded as components of a second channel. If the second channel contained only the U component and the V component, its data amount would be the component values of W × H/2 pixels, which is less than the data amount of a first channel; this mismatch between the data amounts of the first and second channels could cause an image playback error when the image is displayed. Therefore, this embodiment also introduces a first blank component and a second blank component, both of which may consist of values of 0. The sum of the data amounts of the U component and the first blank component is W × H/2 component values, and the sum of the data amounts of the V component and the second blank component is W × H/2 component values, so the U component, the V component, the first blank component, and the second blank component together equal the component values of W × H pixels. With the second channel comprising the U component, the V component, the first blank component, and the second blank component, the data-amount mismatch between the first and second channels is eliminated. In this way, a YUVA image carries not only a luminance component and chrominance components but also a transparency component: decoding and displaying it directly yields an image with its own transparency, without needing an additional gray-scale image to carry the transparency as in the prior art, thereby reducing the number of images and the data amount they generate.
As such, in some embodiments the step S120 may include: storing the YUVA image;
in other embodiments, the step S120 may include: a Y component, a U component, a V component, and an A component of the YUVA image. Since the component values of each pixel corresponding to the first blank component and the second blank component are the same, for example, are all "0", the component value of the blank component of each pixel does not need to be stored separately, so that the amount of data required to be stored can be further reduced, and the consumption of storage space is reduced. Thus, when the YUVA image is subsequently displayed, the first blank component and the second blank component are filled according to the stored Y component, U component, V component, and the number of pixels corresponding to the a component, and the sequence frame animation in step S130 is played by combining the Y component, U component, V component, and a component, the first blank component, and the second blank component.
Optionally, the method further comprises:
converting the YUVA image into an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
the step S120 may include: storing the RGBA image; alternatively, a compressed file of the RGBA image is stored.
In this embodiment, converting the YUVA image into an RGBA image facilitates fast display by an image processor that renders from the three RGB color components. The method in this embodiment therefore further includes: converting the YUVA image into the RGBA image. If the display device renders from the three RGB color components and the conversion is completed in advance, subsequent playback is further accelerated, avoiding the excessive CPU utilization otherwise incurred while playing the sequence frame animation.
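The patent does not specify the conversion matrix; the per-pixel sketch below uses full-range BT.601 coefficients as an assumption, with the transparency A component carried through unchanged:

```python
# Per-pixel YUVA -> RGBA conversion (full-range BT.601, assumed).

def yuva_to_rgba(y, u, v, a):
    """Convert one YUVA pixel (components in 0-255) to RGBA."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b), a
```

Converting a whole frame is just this mapping applied per pixel; a neutral-chroma pixel (U = V = 128) maps to a gray of the same luminance, which is a quick sanity check for the coefficients.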
In some embodiments, the method further comprises:
receiving tag information sent by a server, or having the playing terminal (i.e., the decompression end) determine the basic sequence frame animations or common sequence frame animations according to how frequently it plays each animation and generate the corresponding tag information; the tag information may be determined from the usage frequency of the different sequence frame animations;
and when the image processing application is started or after the image processing application is started, restoring the images of the basic sequence frame animation or the common sequence frame animation before the sequence frame animation is played according to the label information. In other embodiments, the method further comprises: and storing the image in a hard disk or a memory.
In this way, the playing terminal or the server can dynamically determine the basic sequence frame animation or the common sequence frame animation.
As shown in fig. 4, the present embodiment provides an image information processing apparatus including:
a restoring module 110, configured to restore an image in a compressed file compressed by a video compression technique before playing the sequence frame animation;
a storage module 120 for storing the image;
and the playing module 130 is configured to, when the sequence frame animation is played, read the images and sequentially play the images according to the playing order of the images.
The restoring module 110, the storage module 120, and the playing module 130 may all correspond to program modules, and the program modules may receive the compressed file, restore the images of the sequence frame animation, and play the sequence frame animation after being executed by the processor.
Optionally, the apparatus further comprises:
the compression module is used for compressing the image to obtain compressed data if the current load rate is smaller than a preset load rate;
the storage module 120 is specifically configured to store the compressed data.
The compression module can adopt intra-image compression of a single image. Intra-image compression reduces the data amount of the single image, and a single image compressed this way can be decompressed and restored quickly.
The compressed data stored directly by the storage module 120 has a smaller data amount than the image data of a single uncompressed image, and therefore occupies fewer storage resources.
Optionally, the compression module is specifically configured to compress the image by using a JPEG compression technique if the current load rate is smaller than the preset load rate.
The preset load rate may be 60% or 70% of the utilization rate of the processor such as the CPU, and the preset load rate is only an example, and is not limited to these values in the specific implementation.
An image compressed by the JPEG compression technique is characterized by small image-quality loss and a high compression rate, so the image quality is maintained while the storage space occupied in the storage module is kept small.
Optionally, the image is a YUVA image, wherein the YUVA image includes: a luminance Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component, and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component.
In some further embodiments, the storage module 120 is specifically configured to store the YUVA image; or, the Y component, U component, V component, and A component of the YUVA image.
Furthermore, the apparatus further comprises:
a conversion module configured to convert the YUVA image into an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component;
the storage module 120 is further specifically configured to store the RGBA image; alternatively, a compressed file of the RGBA image is stored.
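As a sketch of the conversion module's per-pixel work, the following uses BT.601 full-range coefficients; the text does not fix a conversion matrix, so the coefficients and function name are assumptions:

```python
def yuva_pixel_to_rgba(y: int, u: int, v: int, a: int):
    """Convert one YUVA pixel to RGBA using BT.601 full-range
    coefficients (an assumption). The A component is copied through
    unchanged, as described in the text."""
    d, e = u - 128, v - 128
    # clamp each converted channel to the 8-bit range
    r = max(0, min(255, round(y + 1.402 * e)))
    g = max(0, min(255, round(y - 0.344136 * d - 0.714136 * e)))
    b = max(0, min(255, round(y + 1.772 * d)))
    return r, g, b, a
```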
As shown in fig. 5, an embodiment of the present invention further provides an image information processing method, including:
step S210: determining a key image in images to be compressed and common images other than the key image; wherein the image comprises: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component;
step S220: and according to the key image and the common image, performing video compression on the image and obtaining a compressed file of the image. The compressed file may be a compressed file of the aforementioned sequence frame animation.
The image information processing method provided in some embodiments may be applied to a compression end, which may also be a sending end, and may be a server or a terminal device providing the image information. The server can be a cloud server or a server group applied to the network. The terminal device can be various electronic devices, such as a mobile phone, a wearable device, a virtual reality device or an augmented reality device.
The images to be compressed in step S210 may be a plurality of independent images having content relevance. Because the images have content relevance, that relevance can be expressed as similarity between two images played one after another, and can be described in terms of a degree of similarity or a degree of difference. The implementation exploits this similarity for video compression: a plurality of independent image files are converted into a video and compressed, so that video compression technology reduces the data volume as much as possible. As a result, the compressed file requires few storage resources, and transmitting it consumes little traffic and occupies little bandwidth.
With the video compression technique in this embodiment, on the one hand, inter-image compression between adjacent images can be achieved, which removes, for example, redundant data in portions the images share. Meanwhile, key images and normal images are determined, where a normal image is any image other than a key image. A key image here may be an image whose degree of difference from the previous image is greater than a preset value; for example, if the difference between the (s+1)th image and the sth image is greater than the preset value (i.e., the similarity is less than a specific value), the (s+1)th image may be considered a key image. An image whose degree of difference from a key image is smaller than the predetermined value may be called a normal image of that key image. On the other hand, by distinguishing key images from normal images, the plurality of images can be divided by similarity into several groups to be compressed; relative to compressing all images as a single group, this maximizes the removal of redundant data within each group. Thus, the data volume is compressed through inter-image compression, and the efficiency of that compression is raised as much as possible by distinguishing key images from normal images.
Alternatively, as shown in fig. 6, the step S210 may include:
step S211: determining a calculation component, wherein the calculation component is one or more of the Y component, the U component, and the V component; or the calculation component is one or more of the conversion components of the Y component, the U component, and the V component;
step S212: determining a difference calculation weight according to the component A;
step S213: obtaining an nth calculated value of the nth image and an (n+1)th calculated value of the (n+1)th image based on the extracted calculation component and the difference calculation weight, wherein n is a positive integer;
step S214: carrying out difference calculation on the nth calculated value and the (n+1)th calculated value;
step S215: and determining a key image and the common image according to the difference calculation result.
In this embodiment, a calculation component is first determined. The calculation component may be one or more of the Y component, the U component, and the V component of an image; or one or more of the conversion components of the Y, U, and V components, where the conversion components may be one or more of the R, G, and B components.
For a channel with m bits, an A-component value of 0 indicates fully transparent; a value of 2^m - 1 indicates opaque; and a value between 0 and 2^m - 1 indicates semi-transparent. For example, for an 8-bit channel, an A-component value of 0 indicates full transparency, a value of 255 indicates opacity, and a value between 0 and 255 indicates semi-transparency.
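A minimal sketch of this m-bit interpretation, with an illustrative function name:

```python
def alpha_meaning(a: int, m: int = 8) -> str:
    """Interpret an m-bit A component as described above."""
    opaque = (1 << m) - 1          # 2^m - 1, e.g. 255 for an 8-bit channel
    if a == 0:
        return "fully transparent"
    if a == opaque:
        return "opaque"
    return "semi-transparent"
```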
It should be noted that, in the embodiment of the present invention, the A component is introduced to determine the weight. For an 8-bit channel, an A-component value of 0 indicates full transparency; the corresponding pixel may still carry a color value, but that color is never presented. The difference calculation weight is therefore set based on the A component in this embodiment, so that similarity or difference is computed from the effect the two images finally present to the user, with the A component applied to the color components, and the images are differentiated as accurately as possible.
In this embodiment, if only one component is selected as the calculation component, the Y component may be chosen. The Y component represents the characteristics of the image more finely than the U and/or V components, so selecting it as the single calculation component still reflects the image accurately; and since only one component participates in the calculation, the computation amount is reduced, which saves substantial work for images containing a large number of pixels.
In some embodiments, when selecting one or more components of the image data as calculation components, a rough estimate of the degree of difference between the two images may be made first. The estimated difference between the two images is obtained, for example, by down-sampling or by a predictive algorithm, and may include at least one of an estimated transparency difference and an estimated color difference. The down-sampling may include sampling the individual component values of the pixels at 1/2, 1/5, or 1/10; the predictive algorithm may include a mean algorithm, a median algorithm, or an extremum algorithm.
For example, the method may comprise:
and acquiring transparency difference between the images, and if the transparency difference is greater than a transparency difference threshold, selecting more than one component as the calculation component.
In some embodiments, before determining the color difference between two images, their A components may be compared. If the transparency difference between the A components of the two images is greater than the transparency difference threshold, at least one of the two images may directly be treated as a key image; if it is less than the threshold, the color differences are compared instead. When the A components of two images differ, the transparent areas presented to the user differ, and the presentation effects differ greatly. Therefore, based on the comparison of A components, it can be decided whether one component or multiple components of the image data are selected for the subsequent distinction between key images and normal images, which can greatly reduce the amount of calculation.
The estimated transparency difference being greater than the estimated transparency difference threshold may comprise at least one of:
if the two images are overlapped, the distance between the full transparent areas of the two images is larger than the preset distance;
if the two images overlap, the fully transparent regions of the two images are separated by at least one translucent or opaque image subregion.
In order to compare the A components quickly, the A values of some pixels may be extracted from each sub-region of the two images by down-sampling and compared as representatives, thereby estimating the transparency difference between the two images.
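A rough sketch of this down-sampled comparison; the sampling step, per-sample difference threshold, and differing-sample ratio below are illustrative assumptions, not values from the text:

```python
def estimated_transparency_difference_large(a_plane_1, a_plane_2,
                                            step=5, thresh=16, ratio=0.1):
    """Quick A-component comparison by down-sampling (here 1/5):
    sample representative alpha values from each plane and flag the
    image pair when enough samples differ by more than a threshold."""
    s1, s2 = a_plane_1[::step], a_plane_2[::step]
    differing = sum(1 for x, y in zip(s1, s2) if abs(x - y) > thresh)
    return differing / max(1, len(s1)) > ratio
```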
In some embodiments, the estimated color difference may be obtained by comparing the color difference of the two images, and if the estimated color difference is small, the estimated transparency difference of the two images may be determined according to the a component.
In some embodiments, the method further comprises:
if the estimated transparency difference between the images is less than the transparency difference threshold, the step of selecting one or more components as calculation components based on the estimated color difference between the two images is carried out.
The selecting one or more components as the calculation components based on the color difference of the two images may include:
obtaining an estimated color difference between the images; if the estimated color difference is larger than a color difference threshold, selecting two or three components as calculation components to participate in the difference calculation; if it is smaller than the color difference threshold, only one component may be selected as the calculation component for the difference calculation. The images here may be the images themselves or the corresponding RGBA images, among others.
Determining the estimated color difference may comprise at least one of:
the mean of the pixel values of the two images is compared,
comparing the median of the pixel values of the two images;
comparing the maximum pixel values of the two images;
comparing the minimum pixel values of the two images;
and judging, according to one or more of the comparison results of the mean, the median, the maximum pixel value, and the minimum pixel value, whether the color difference degree of the two corresponding images is large enough. For example, if the difference between the two means is greater than the predetermined mean value, the estimated color difference between the two images is considered large; otherwise it is considered small. Likewise, if the difference between the two medians is greater than the predetermined median value, the estimated color difference is considered large; otherwise small. The comparisons of the minimum and maximum pixel values may also be combined: if the comparison value of the minimum pixel values and the comparison value of the maximum pixel values are both greater than their corresponding predetermined values, the color difference degree of the images is judged sufficiently large; otherwise it is judged small. If the estimated color difference of the images is large, at least one of the two images is a key image.
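The mean/median comparison above might be sketched as follows; the threshold values are illustrative, not taken from the text:

```python
from statistics import mean, median

def estimated_color_difference_large(px_a, px_b,
                                     mean_thresh=10, median_thresh=10):
    """Rough estimate of the color difference between two images using
    the mean and median comparisons described above. px_a / px_b are
    flat lists of pixel values; either statistic exceeding its
    threshold marks the pair as differing enough for one image to be
    treated as a key image."""
    mean_diff = abs(mean(px_a) - mean(px_b))
    median_diff = abs(median(px_a) - median(px_b))
    return mean_diff > mean_thresh or median_diff > median_thresh
```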
In this embodiment, the difference calculation may be performed by comparing the pixels at corresponding coordinates in the two images one by one. For example, the Y component of the pixel at the (x, y) coordinate of the nth image is combined with the weight corresponding to that pixel's A component to obtain a comparison value for the pixel; this comparison value is one of the nth calculated values mentioned above. Similarly, the Y component of the pixel at the (x, y) coordinate of the (n+1)th image is combined with the weight corresponding to that pixel's A component to obtain its comparison value, one of the (n+1)th calculated values. When the difference between the two comparison values is small, for example within a given interval, the pixel values of the two pixels are considered equal. The number of pixels with equal pixel values in the two images can then be counted and used as a parameter indicating the difference between the two images; alternatively, the ratio of the number of equal-valued pixels to the total number of pixels in one image can be calculated and used to reflect the similarity between the two YUVA images.
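A simplified sketch of this alpha-weighted comparison, using only the Y component and representing each pixel as a (Y, A) pair; the tolerance value and data representation are assumptions:

```python
def weighted_similarity(img_n, img_n1, tolerance=4):
    """Count pixels whose alpha-weighted Y values match within a
    tolerance, as sketched above, and return the ratio of matching
    pixels to total pixels. Each image is a list of (Y, A) pairs,
    a simplified stand-in for full per-pixel data."""
    equal = 0
    for (y0, a0), (y1, a1) in zip(img_n, img_n1):
        # normalise A to [0, 1] and use it as the difference weight
        v0 = y0 * (a0 / 255.0)
        v1 = y1 * (a1 / 255.0)
        if abs(v0 - v1) <= tolerance:   # "within a given interval"
            equal += 1
    return equal / len(img_n)
```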
In some embodiments, the key image and the normal image may be determined by down-sampling in order to reduce the amount of computation. The sampling frequency at which the calculated value is calculated may be higher than the sampling frequency at which the estimated difference is made.
Optionally, the step S215 may include:
if the difference calculation result of the (n+1)th calculated value compared with the nth calculated value is outside a preset range, the (n+1)th image is the key image; and/or, if the difference calculation result of the (n+1)th calculated value compared with the nth calculated value is within the preset range, the (n+1)th image is the common image.
The result of the difference calculation may be the number of pixels other than those counted as having equal pixel values, or the ratio of the total number of pixels of one image minus the number of equal-valued pixels to that total. For example, the comparison values of all pixels involved in the calculation may be averaged: if the average is within the mean range, the similarity between the two images is considered high; if it is outside the mean range, the difference between the two images is considered large, and one of them is a key image. In short, a difference calculation result outside the preset range indicates that the degree of difference between the two images is large, and the (n+1)th image is a key image differing substantially from the previous image; otherwise it may be a normal image.
Optionally, the value range of the A component is 0 to 2^m - 1, where m is the number of bits of a color channel. The step S212 may include: normalizing the A component to obtain the difference calculation weight.
In the normalization, the denominator may be 2^m; the difference calculation weight corresponding to the A component thus obtained falls in the range 0 to 1.
In some embodiments, the A component could be used directly as the difference calculation weight; however, doing so would involve large values and increase the calculation difficulty, so the above embodiments instead use the normalized value obtained from the A component as the difference calculation weight.
Optionally, as shown in fig. 7, the method further includes:
step S200: converting an original RGBA image into an image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency a component;
wherein the data amount of the compressed image is smaller than the data amount of the compressed RGBA image.
In the present embodiment, the image is converted from an RGBA image, that is, the RGBA image is an original image; and the image is a converted image.
When converting the RGBA image into the image, the R component, G component, and B component of the RGBA image are used to compute the Y component, U component, and V component of the image, while the A component of the RGBA image is assigned directly to the A component of the image. The conversion of the R, G, and B components into Y, U, and V components can refer to the related-art conversion of an RGB image into a YUV image.
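The forward conversion can be sketched per pixel as below; BT.601 full-range coefficients are an assumption, since the text only points to related-art RGB-to-YUV conversion:

```python
def rgba_pixel_to_yuva(r: int, g: int, b: int, a: int):
    """Convert one RGBA pixel to YUVA. BT.601 full-range coefficients
    are an assumption; the A component is assigned directly, as the
    text describes."""
    def clamp(x):               # keep each channel within 8 bits
        return max(0, min(255, x))
    y = clamp(round(0.299 * r + 0.587 * g + 0.114 * b))
    u = clamp(round(-0.168736 * r - 0.331264 * g + 0.5 * b + 128))
    v = clamp(round(0.5 * r - 0.418688 * g - 0.081312 * b + 128))
    return y, u, v, a
```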
The RGBA image is converted into an image and then compressed; compared with the compressed data obtained by compressing the RGBA image directly, the resulting compressed file is smaller, i.e., the compression is greater, reducing the data amount of the compressed file further.
In this embodiment the image is converted from an RGBA image. To improve processing efficiency, while the RGBA image is being converted, the difference calculation is performed directly on the RGBA image, and the key images and normal images among the converted images are then determined according to the correspondence between the RGBA images and the converted images. In this way, image-type conversion and key/normal image determination proceed synchronously, reducing compression delay. For example, a first thread converts RGBA images while a second thread distinguishes key images from common images; the first and second threads may each comprise one or more threads, but they are distinct, so parallel computing speeds up production of the compressed file.
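A toy sketch of the two-thread scheme, with integers standing in for images and an arbitrary difference threshold; all names and the placeholder logic are illustrative:

```python
import threading

def parallel_pipeline(rgba_images):
    """Two-thread sketch of the scheme above: a first thread converts
    RGBA images to YUVA while a second thread classifies key vs normal
    images on the original RGBA data."""
    converted, labels = [], []

    def convert_worker():
        for img in rgba_images:
            converted.append(("yuva", img))   # placeholder conversion

    def classify_worker():
        prev = None
        for img in rgba_images:
            # first image, or difference above threshold -> key image
            labels.append("key" if prev is None or abs(img - prev) > 10
                          else "normal")
            prev = img

    t1 = threading.Thread(target=convert_worker)
    t2 = threading.Thread(target=classify_worker)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return converted, labels
```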
Therefore, in some embodiments, the step S211 may include: extracting one or more of the R component, the G component, and the B component in the RGBA image;
the step S212 may include: determining a difference calculation weight according to the component A;
the step S213 may include: obtaining an nth calculated value of the nth RGBA image and an (n+1)th calculated value of the (n+1)th RGBA image based on the extracted color components and the calculation weight, wherein n is a positive integer;
the step S214 may include: carrying out difference calculation on the nth calculated value and the (n+1)th calculated value;
the step S215 may include: and determining a key image and the common image according to the difference calculation result.
In this embodiment, specifically, whether one or more of the R component, the G component, and the B component are selected may be determined according to the color difference. If the color difference is small, the difference calculation can be performed with only one color component, thereby reducing the amount of calculation. In the present embodiment, the difference calculation combines the A component with that one color component, so that not only the color component but also the A component is taken into account.
Performing the difference calculation in step S214 may include: subtracting the nth calculated value from the (n+1)th calculated value to obtain a difference calculation result, which is then compared with the preset range described above.
Further, the plurality of images may be images of a sequence frame animation, or the RGBA images may be images of a sequence frame animation. Images of a sequence frame animation have strong content relevance, so the compression technique above can greatly reduce the data volume and the storage and/or bandwidth resources it occupies.
In addition, in the embodiment of the present invention, the image and the RGBA image may be transparent images at least partially transparent, so that the data amount may be compressed to the maximum extent by using the above compression method, and unnecessary waste of storage resources and transmission resources may be reduced.
Optionally, the step S220 may include:
and performing video compression on the image by adopting a VP9 encoding mode according to the key image and the common image.
In this embodiment, the VP9 encoding mode is used to perform video compression on the images, yielding a WebM compressed video file; this balances the size of the file against the time required for compression, increasing the encoding rate while reducing the data volume as much as possible.
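One practical way to drive VP9/WebM encoding with an alpha plane is through ffmpeg; the sketch below only builds the argument list (running it requires ffmpeg with libvpx to be installed), and the key-frame interval is an illustrative parameter:

```python
def vp9_webm_command(pattern: str, fps: int, out_path: str,
                     keyframe_interval: int = 30):
    """Build an ffmpeg command that encodes an image sequence to a
    VP9 WebM file with an alpha plane (yuva420p)."""
    return [
        "ffmpeg", "-framerate", str(fps), "-i", pattern,
        "-c:v", "libvpx-vp9",            # VP9 encoder
        "-pix_fmt", "yuva420p",          # YUV 4:2:0 plus alpha plane
        "-g", str(keyframe_interval),    # max distance between key frames
        out_path,
    ]

cmd = vp9_webm_command("frame_%04d.png", 25, "anim.webm")
```

Running `subprocess.run(cmd, check=True)` on such a list would produce the WebM compressed file referred to in the text.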
As shown in fig. 11, the present embodiment provides an electronic apparatus including:
a memory;
and a processor, connected with the memory, configured to implement, by executing the computer-executable instructions stored in the memory, the image information processing method provided by one or more of the foregoing technical solutions, for example, one or more of the image information processing methods shown in fig. 1 to 2, 5 to 7 and 8 to 10.
The memory can be various types of memories, such as random access memory, read only memory, flash memory, and the like. The memory may be used for information storage, e.g., storing computer-executable instructions, etc. The computer-executable instructions may be various program instructions, such as object program instructions and/or source program instructions, and the like.
The processor may be various types of processors, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, an application-specific integrated circuit, or an image processor, among others.
The processor may be connected to the memory via a bus. The bus may be an integrated circuit bus or the like.
In some embodiments, the terminal device may further include: a communication interface, which may include: a network interface, e.g., a local area network interface, a transceiver antenna, etc. The communication interface is also connected with the processor and can be used for information transceiving.
In some embodiments, the terminal device further comprises a human-computer interaction interface, for example, the human-computer interaction interface may comprise various input and output devices, such as a keyboard, a touch screen, and the like.
The present embodiments provide a computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed, enable one or more image information processing methods provided by the subject technology, for example, one or more of the methods shown in fig. 1, fig. 2, and fig. 5-10.
The computer storage medium may be any of various recording media with a recording function, for example, storage media such as a CD, a floppy disk, a hard disk, a magnetic tape, an optical disk, a USB flash drive, or a removable hard disk. Optionally, the computer storage medium may be a non-transitory storage medium readable by a processor, so that after the computer-executable instructions stored in the computer storage medium are acquired and executed by the processor, the information processing method provided by any of the foregoing technical solutions can be implemented, for example, the information processing method applied to the terminal device or the information processing method applied to the application server.
The present embodiments also provide a computer program product comprising computer executable instructions; the computer-executable instructions, when executed, enable one or more of the image information processing methods provided by the foregoing aspects, for example, one or more of the methods shown in fig. 1, fig. 2, and fig. 5 to fig. 10.
The program product includes a computer program tangibly embodied on a computer storage medium; the computer program includes program code for performing the method illustrated in the flow charts, and the program code may include instructions corresponding to the steps of the method provided by embodiments of the present invention. The program product may be various applications, software development kits, and the like.
Combining any of the above embodiments, several specific examples are provided below:
example 1:
as shown in fig. 8, the present example provides an image information processing method including: decompression, image conversion, coding, buffering, compression and the like. In some application scenarios, the decompression is not necessary.
The decompressing may include:
reading a compressed file, for example, a compressed file compressed by using a video compression technique in the foregoing embodiment;
decompressing the read compressed file to obtain a plurality of images, shown in fig. 8 as image 1, image 2, image 3, and so on.
If the file needs to be subjected to video compression, the subsequent steps of encoding, caching, compressing and the like are carried out. In some embodiments, the image transformation may be an unnecessary step, e.g., if the image is originally stored, then no image transformation is needed.
The image conversion, encoding and buffering and compression may include:
reading a file, wherein the read file can be an image file, specifically an RGBA image as shown in fig. 8;
copying the RGBA data to perform difference calculation, wherein the difference calculation is at pixel granularity, for example comparing corresponding pixels of the two images one by one (or on a down-sampled basis); if the difference is greater than a threshold (Y), the image is considered a key image; if not (N), the image is a non-key image;
the RBGA image is converted into an image, and the components of the image may be as shown in fig. 8, where "0" denotes a blank component, and both the aforementioned first blank component and second blank component may be collectively referred to as a blank component.
In the encoding process, encoding is initialized first. The encoded image frame may be: the video coding is performed according to the key image and the common image, wherein the key image coding can correspond to a key frame in the video, and the common image can correspond to a common frame in the video.
After the encoding initialization, the video header is encoded. The video header may include video parameters, such as the number of image frames included in the video, the width and height of the image frames, the video format, and the frame rate in Frames Per Second (FPS).
When encoding an image frame, a frame header and buffer data are formed, and the frame header may include: the parameter of the image frame indicates, for example, whether the image frame is a key frame corresponding to a key image, a normal frame corresponding to a normal image, or the like. The cached data may include image data, e.g., image data for an image, which may include: component values of the Y component, the A component, the U component, the V component, the first null component, and the second null component.
After the encoding of all images is completed, the end of the video file is encoded; the encoded output may also take the form of frame header plus video data.
And finally, performing video compression. The compression process may include: compression initialization, compression, writing compressed files, for example, writing compressed files to a hard disk, thereby reducing storage resources consumed by storage. In some cases further comprising: and sending the compressed file. The compressed file can be carried in an installation package of an application program of the image application and issued to a receiving end, or can be stored in a server and issued based on a request of the application program in the terminal equipment. In other cases, the method may further comprise: and periodically transmitting the updated compressed file to the terminal equipment where the application program is located.
The usage scenario is as follows: a plurality of videos need to be played simultaneously, and each face sequence frame animation needs a matching video to be decoded. Encoding: the advantages of using WebM encoding are that it occupies little space and reduces the load on the server.
And (3) decoding:
the first method is as follows: real-time video decoding compressed files or encoding video compressed files.
The second method comprises the following steps: storing the raw image data on a hard disk. Multiple threads decode multiple compressed files.
During idle periods, the CPU encodes the original image data into small files. For example, when the CPU utilization is below a certain threshold, the original image data is encoded, e.g., JPEG-compressed, saving storage resources.
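The idle-period rule might be sketched as follows; the load-measuring callback and the placeholder "compression" stand in for a real CPU probe and a real JPEG encoder (e.g. Pillow), which the text does not specify:

```python
def compress_when_idle(pending, cpu_load, preset_load=0.6):
    """Sketch of the idle-period rule above: while the measured CPU
    load stays under the preset load rate, take raw images off the
    pending list and 'compress' them. cpu_load is a callable returning
    the current utilization in [0, 1]; the tuple is a placeholder for
    real JPEG bytes."""
    compressed = []
    while pending and cpu_load() < preset_load:
        raw = pending.pop(0)
        compressed.append(("jpeg", raw))   # placeholder for JPEG encoding
    return compressed
```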
Image coding: for image coding where the real image includes a transparency component, WebP coding, for example, adopts a YUV + alpha (A) mode; to balance encoding time against file size, JPEG compression is used.
Example 2:
as shown in fig. 9, the present example provides an image information processing method, and the image information may include: image data of various images. The method may comprise:
reading the compressed file;
the compressed file is decompressed to obtain a decompressed file 1, a decompressed file 2 and a decompressed file 3, and the implementation is not limited to 3 decompressed files.
The caching operation may include: a video file header, a picture frame header, and buffer data (which may include the original picture data).
The encoding operation may include:
initializing codes;
encoding the image frame;
and judging whether the image is the last image or not, and if not, returning to the cache operation.
The image conversion may include:
obtaining an image by encoding the image frame;
removing blank components from the image to obtain a gray level image;
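Removing the blank components can be sketched as concatenating only the useful planes into one gray-level buffer; representing the planes as a dictionary of byte strings is an assumption:

```python
def strip_blank_components(planes: dict) -> bytes:
    """Drop the two blank filler components and keep only the Y, U, V
    and A planes, concatenated into a single gray-level buffer, as in
    the image-conversion step above. `planes` maps plane name to its
    raw bytes."""
    return planes["Y"] + planes["U"] + planes["V"] + planes["A"]
```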
storing the raw image data; or alternatively, JPEG-compressing it and storing the JPEG-format compressed data.
If the current CPU occupancy is low, the original image data is read from the hard disk and JPEG-compressed.
Example 3:
as shown in fig. 10, the present example provides a playback schematic method of a sequence frame animation, including:
requesting an image file sequence, wherein the sequence frame animation is formed by switching images in order; the images are therefore, in some cases, stored on the device in sequence form, forming the image file sequence;
reading an image file from a hard disk;
judging whether a JPEG image exists; if so, reading the JPEG image file, and if not, reading the original image file, which may be an image with the blank components removed;
Converting the image into an RGBA image;
performing file mapping, for example mapping each image file to its RGBA image, so that the playback order of the RGBA images is known.
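The playback flow of fig. 10 can be sketched as follows: for each frame in sequence order, prefer the JPEG version if one exists, otherwise fall back to the raw file, then convert to RGBA and display. All callables, and the `.jpg` naming convention, are illustrative assumptions, not details from the patent.

```python
def pick_source(path, exists):
    """Choose which file to read for one frame: the JPEG version if it
    exists, else the original file (naming convention is assumed)."""
    jpeg_path = path + ".jpg"
    return jpeg_path if exists(jpeg_path) else path

def play_sequence(paths, exists, read_file, to_rgba, show):
    """Sketch of the playback flow: read each frame in sequence order,
    convert it to an RGBA image, and display it."""
    for path in paths:
        show(to_rgba(read_file(pick_source(path, exists))))

# Toy usage with stand-in callables: only frame0 has a JPEG version.
shown = []
play_sequence(["frame0", "frame1"],
              exists=lambda p: p == "frame0.jpg",
              read_file=lambda p: p,            # toy reader
              to_rgba=lambda img: ("RGBA", img),
              show=shown.append)
```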
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces; the indirect coupling or communication connection between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (14)

1. An image information processing method characterized by comprising:
restoring the images in the compressed file compressed by the video compression technology before playing the sequence frame animation;
storing the image;
when the sequence frame animation is played, reading the images and sequentially playing the images according to the playing sequence of the images;
the method further comprises the following steps:
acquiring label information of the sequence frame animation; the label information is set according to the use frequency of the sequence frame animation;
the storing the image comprises:
and if the label information indicates that the sequence frame animation is a common sequence frame animation or a basic sequence frame animation, storing the image in a memory.
2. The method of claim 1, further comprising:
if the current load rate is smaller than the preset load rate, compressing the image to obtain compressed data;
the storing the image comprises:
and storing the compressed data.
3. The method of claim 2,
if the current load rate is smaller than the preset load rate, compressing the image to obtain compressed data, including:
and if the current load rate is less than the preset load rate, compressing the image by using a JPEG compression technology.
4. The method of claim 1, wherein the image is a YUVA image, and wherein the YUVA image comprises: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component.
5. The method of claim 4, wherein the storing the image comprises:
storing the YUVA image;
or, alternatively,
storing a Y component, a U component, a V component, and an A component of the YUVA image.
6. The method of claim 4, further comprising:
converting the YUVA image into an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component;
the storing the image comprises:
storing the RGBA image;
or, alternatively,
storing a compressed file of the RGBA image.
7. An image information processing apparatus characterized by comprising:
the restoring module is used for restoring the images in the compressed file compressed by the video compression technology before the sequence frame animation is played;
a storage module for storing the image;
the playing module is used for reading the images and playing the images in sequence according to the playing sequence of the images when the sequence frame animation is played;
the storage module is further used for acquiring label information of the sequence frame animation; the label information is set according to the use frequency of the sequence frame animation;
the storage module is specifically configured to store the image in a memory if the tag information indicates that the sequence frame animation is a common sequence frame animation or a basic sequence frame animation.
8. The apparatus of claim 7, further comprising:
the compression module is used for compressing the image to obtain compressed data if the current load rate is smaller than a preset load rate;
the storage module is specifically configured to store the compressed data.
9. The apparatus of claim 8,
the compression module is specifically configured to compress the image by using a JPEG compression technique if the current load rate is smaller than the preset load rate.
10. The apparatus of claim 7, wherein the image is a YUVA image, and wherein the YUVA image comprises: a brightness Y component, a first color difference U component, a second color difference V component, a transparency A component, a first blank component and a second blank component; the data amount of the Y component is equal to that of the A component; the sum of the data amounts of the U component and the first blank component is equal to 1/2 the data amount of the A component; and the sum of the data amounts of the V component and the second blank component is equal to 1/2 the data amount of the A component.
11. The apparatus according to claim 10, wherein the storage module is specifically configured to store the YUVA image; or, to store a Y component, a U component, a V component, and an A component of the YUVA image.
12. The apparatus of claim 10, further comprising:
a conversion module configured to convert the YUVA image into an RGBA image, wherein the RGBA image comprises: a red R component, a green G component, a blue B component, and a transparency A component;
the storage module is specifically configured to store the RGBA image; alternatively, a compressed file of the RGBA image is stored.
13. An electronic device, comprising:
a memory;
a processor coupled to the memory and configured to implement the method provided by any one of claims 1 to 6 by executing computer-executable instructions stored on the memory.
14. A computer storage medium having stored thereon computer-executable instructions; the computer-executable instructions, when executed, implement the method provided by any one of claims 1 to 6.
CN201810559284.6A 2018-06-01 2018-06-01 Image information processing method and device, and storage medium Active CN108668170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810559284.6A CN108668170B (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810559284.6A CN108668170B (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Publications (2)

Publication Number Publication Date
CN108668170A CN108668170A (en) 2018-10-16
CN108668170B true CN108668170B (en) 2021-07-02

Family

ID=63775303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810559284.6A Active CN108668170B (en) 2018-06-01 2018-06-01 Image information processing method and device, and storage medium

Country Status (1)

Country Link
CN (1) CN108668170B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070867A (en) * 2019-06-11 2020-12-11 腾讯科技(深圳)有限公司 Animation file processing method and device, computer readable storage medium and computer equipment
CN113075993B (en) * 2021-04-09 2024-02-13 杭州华橙软件技术有限公司 Video display method, device, storage medium and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101197770A (en) * 2007-10-09 2008-06-11 深圳市丕微科技企业有限公司 Method for transmitting multimedia data by aid of network
CN101710992A (en) * 2009-11-16 2010-05-19 乐视网信息技术(北京)股份有限公司 Pre-decoding high definition player and playing method
CN104240739A (en) * 2014-09-04 2014-12-24 广东欧珀移动通信有限公司 Music playing method and device for mobile terminal
CN106375759A (en) * 2016-08-31 2017-02-01 深圳超多维科技有限公司 Video image data coding method and device, and video image data decoding method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040218669A1 (en) * 2003-04-30 2004-11-04 Nokia Corporation Picture coding method
CN1281063C (en) * 2003-12-23 2006-10-18 无敌科技(西安)有限公司 Cartoon quick condensing and decondensing method
GB2441365B (en) * 2006-09-04 2009-10-07 Nds Ltd Displaying video data
US8363729B1 (en) * 2008-11-06 2013-01-29 Marvell International Ltd. Visual data compression algorithm with parallel processing capability
JP2015511458A (en) * 2012-02-07 2015-04-16 ヒルシュマン カー コミュニケーション ゲゼルシャフト ミット ベシュレンクテル ハフツングHirschmann Car Communication GmbH How to quickly switch between multiple alternative transmission paths

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101197770A (en) * 2007-10-09 2008-06-11 深圳市丕微科技企业有限公司 Method for transmitting multimedia data by aid of network
CN101710992A (en) * 2009-11-16 2010-05-19 乐视网信息技术(北京)股份有限公司 Pre-decoding high definition player and playing method
CN104240739A (en) * 2014-09-04 2014-12-24 广东欧珀移动通信有限公司 Music playing method and device for mobile terminal
CN106375759A (en) * 2016-08-31 2017-02-01 深圳超多维科技有限公司 Video image data coding method and device, and video image data decoding method and device

Also Published As

Publication number Publication date
CN108668170A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
US11132818B2 (en) Predicting attributes for point cloud compression according to a space filling curve
WO2021068598A1 (en) Encoding method and device for screen sharing, and storage medium and electronic equipment
EP4022903A1 (en) Block-based predictive coding for point cloud compression
JP4698739B2 (en) Image compression for computer graphics
JP2020174374A (en) Digital image recompression
US9609338B2 (en) Layered video encoding and decoding
JP2008541503A (en) Remote display processing method based on server / client structure
EP3410302B1 (en) Graphic instruction data processing method, apparatus
US8760366B2 (en) Method and system for remote computing
CN102156611A (en) Method and apparatus for creating animation message
CN111131828B (en) Image compression method and device, electronic equipment and storage medium
CN108668170B (en) Image information processing method and device, and storage medium
CN108668169B (en) Image information processing method and device, and storage medium
EP2843954B1 (en) Lossy color compression using adaptive quantization
WO2022095797A1 (en) Image compression method and apparatus, and intelligent terminal and computer-readable storage medium
US10250892B2 (en) Techniques for nonlinear chrominance upsampling
CN111669595A (en) Screen content coding method, device, equipment and medium
CN111526366B (en) Image processing method, image processing apparatus, image capturing device, and storage medium
US20220114761A1 (en) Decoding data arrays
CN101065760B (en) System and method for processing image data
US20230262210A1 (en) Visual lossless image/video fixed-rate compression
US6829390B2 (en) Method and apparatus for transmitting image updates employing high compression encoding
CN116708793B (en) Video transmission method, device, equipment and storage medium
US10002586B1 (en) Compression of display data stored locally on a GPU
CN110830744B (en) Safety interaction system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant