CN115459780A - Data compression method, data decompression method and related equipment - Google Patents

Data compression method, data decompression method and related equipment

Info

Publication number
CN115459780A
Authority
CN
China
Prior art keywords
feature
features
key
block
sequence
Prior art date
Legal status
Pending
Application number
CN202211156353.1A
Other languages
Chinese (zh)
Inventor
王立传
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202211156353.1A
Publication of CN115459780A
Priority to PCT/CN2023/120392 (WO2024061316A1)

Classifications

    • H: ELECTRICITY
    • H03: ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00: Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30: Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a data compression method, a data decompression method and related equipment, and relates to the field of data processing. The data compression method comprises the following steps: processing a data sequence by using a feature extraction network to obtain a feature sequence; determining one feature out of every first number of features in the feature sequence as a key feature, and determining the other features as non-key features; processing the key features to obtain an encoding result of the key features; obtaining an encoding result of the non-key features according to reference features of the non-key features, wherein the reference features are a preset number of features preceding the non-key features; and sending information of the feature extraction network and the encoding result of the feature sequence to a receiving end, so that the receiving end obtains the data sequence through decoding and feature reconstruction. The invention can improve the compression ratio of the data sequence.

Description

Data compression method, data decompression method and related equipment
Technical Field
The present invention relates to the field of data processing, and in particular, to a data compression method, a data decompression method, and a related device.
Background
Feature compression is an emerging area of technology. After a device acquires an image or a video, it uses a feature extraction network to obtain features that represent the image or video. The features are quantized and entropy-coded, and the encoding result is sent to a server side. The server side recovers the features after entropy decoding and inverse quantization. The recovered features may be used for further artificial intelligence tasks, such as reconstructing the images or performing object detection.
Disclosure of Invention
The inventor has found through analysis that, when serialized data such as video is compressed in this way, the video features are treated as independent image features and the time-domain correlation between features is ignored, so the compression ratio is relatively low.
The embodiment of the invention aims to solve the technical problem that: how to increase the compression ratio of data.
According to a first aspect of some embodiments of the present invention, there is provided a method of data compression, comprising: processing a data sequence by using a feature extraction network to obtain a feature sequence; determining one feature out of every first number of features in the feature sequence as a key feature, and determining the other features as non-key features; processing the key features to obtain an encoding result of the key features; obtaining an encoding result of the non-key features according to reference features of the non-key features, wherein the reference features are a preset number of features preceding the non-key features; and sending information of the feature extraction network and the encoding result of the feature sequence to a receiving end, so that the receiving end obtains the data sequence through decoding and feature reconstruction.
In some embodiments, each feature in the sequence of features comprises a plurality of blocks.
In some embodiments, each feature in the sequence of features is three-dimensional data comprising a plurality of two-dimensional data blocks arranged in a predetermined dimension.
In some embodiments, each feature in the sequence of features is in a CHW (channel-height-width) format, and the predetermined dimension is the dimension in which the C direction is located.
In some embodiments, obtaining the encoding result of the non-critical feature from the reference feature of the non-critical feature comprises: for each block of non-critical features, calculating a difference between the block and a co-located block of the reference feature; and entropy coding the difference value corresponding to each block of the non-key features to obtain the coding result of the non-key features.
In some embodiments, obtaining the coding result of the non-critical features from the reference features of the non-critical features comprises: for each block of non-critical features: calculating a difference between the block and each block of the reference feature; entropy coding is carried out on the minimum difference value and the position information of the block of the reference characteristic corresponding to the minimum difference value to obtain the coding information of the block; and determining the coding result of the non-key features according to the coding information of each block of the non-key features.
In some embodiments, obtaining the encoding result of the non-critical feature from the reference feature of the non-critical feature comprises: for each block of non-critical features: calculating a difference between the block and each block in the C direction of the reference feature; entropy coding is carried out on the minimum difference and the position information of the reference characteristic block corresponding to the minimum difference in the direction C to obtain the coding information of the block; and determining the coding result of the non-key features according to the coding information of each block of the non-key features.
In some embodiments, processing the key features, and obtaining the encoding result of the key features includes: and entropy coding is carried out on the key features to obtain the coding result of the key features.
In some embodiments, processing the key feature, and obtaining the encoding result of the key feature includes: entropy coding the first second number of blocks of the key feature; for each uncoded block, an encoding result of the uncoded block is obtained from the encoded blocks of the key features.
In some embodiments, the method of data compression further comprises: for each feature in the feature sequence, after obtaining the encoding result of the feature, adding the feature into a buffer, wherein the feature in the buffer is used as a reference feature; in the case where the number of features in the buffer reaches a preset value, the feature added earliest in the buffer is deleted before the feature is added to the buffer.
In some embodiments, processing the data sequence using the feature extraction network, and obtaining the feature sequence includes: extracting a sequence of original features from the data sequence by using a feature extraction network; each feature in the original sequence of features is at least one of transformed or quantized to obtain a sequence of features including integer features.
In some embodiments, the data sequence is one of RAW data in Bayer format, RAW RGB (red-green-blue) data, RGB data, YUV (luminance-chrominance) format data, or pure Y (luminance) data.
According to a second aspect of some embodiments of the present invention, there is provided a method of data decompression, comprising: acquiring information of a feature extraction network and an encoding result of a feature sequence sent by a sending end, wherein the feature sequence comprises key features and non-key features; entropy decoding the encoding result to obtain a decoding result of the key features and a decoding result of the non-key features; obtaining the key features based on the decoding result of the key features; processing the decoding result of the non-key features based on the decoding result of the reference features of the non-key features to obtain the non-key features; and performing feature recovery according to the key features, the non-key features, and the information of the feature extraction network to obtain the data sequence.
In some embodiments, the decoding result for each feature comprises an entropy decoding result for each block in the feature.
In some embodiments, processing the decoding result of the non-critical feature based on the decoding result of the reference feature of the non-critical feature, and obtaining the non-critical feature comprises: for each block of non-critical features, calculating the sum of the entropy decoding result of the block and the blocks at the same positions of the reference features of the non-critical features to obtain each block of the non-critical features; non-critical features are determined from respective blocks of non-critical features.
In some embodiments, the decoding result of the non-critical features further includes a correspondence between each block of the non-critical features and a corresponding reference block in the reference features of the non-critical features, and processing the decoding result of the non-critical features based on the decoding result of the reference features of the non-critical features, obtaining the non-critical features includes: for each block of non-critical features, calculating the sum of the entropy decoding result of the block and the corresponding reference block of the reference feature of the non-critical features to obtain each block of non-critical features; non-critical features are determined from the respective blocks of non-critical features.
In some embodiments, obtaining the key feature based on the entropy decoding result of the key feature comprises: for each block of the key features after the first second number, determining a block based on the entropy decoding result of the block and the entropy decoding result of the block before the block; from each block of key features, key features are determined.
In some embodiments, performing feature recovery according to the key features, the non-key features, and the information of the feature extraction network, and obtaining the data sequence includes: performing at least one of inverse transformation or inverse quantization on the integer key features and non-key features to generate an original feature sequence; and performing feature recovery on the original feature sequence by using the information of the feature extraction network to obtain the data sequence.
According to a third aspect of some embodiments of the present invention, there is provided an apparatus for data compression, comprising: a feature sequence acquisition module configured to process a data sequence by using a feature extraction network to acquire a feature sequence; a feature determination module configured to determine one feature out of every first number of features in the feature sequence as a key feature, and determine the other features as non-key features; a key feature encoding module configured to process the key features to obtain an encoding result of the key features; a non-key feature encoding module configured to obtain an encoding result of the non-key features according to reference features of the non-key features, wherein the reference features are a preset number of features preceding the non-key features; and a sending module configured to send the information of the feature extraction network and the encoding result of the feature sequence to a receiving end, so that the receiving end obtains the data sequence through decoding and feature reconstruction.
According to a fourth aspect of some embodiments of the present invention, there is provided an apparatus for data decompression, comprising: an encoding result acquisition module configured to acquire information of a feature extraction network and an encoding result of a feature sequence sent by a sending end, wherein the feature sequence comprises key features and non-key features; an entropy decoding module configured to perform entropy decoding processing on the encoding result to obtain a decoding result of the key features and a decoding result of the non-key features; a key feature obtaining module configured to obtain the key features based on the decoding result of the key features; a non-key feature obtaining module configured to process the decoding result of the non-key features based on the decoding result of the reference features of the non-key features to obtain the non-key features; and a feature recovery module configured to perform feature recovery according to the key features, the non-key features and the information of the feature extraction network to obtain the data sequence.
According to a fifth aspect of some embodiments of the present invention there is provided a data compression system comprising: means for data compression; and means for data decompression.
According to a sixth aspect of some embodiments of the present invention there is provided an electronic device comprising: a memory; and a processor coupled to the memory, the processor configured to perform any of the foregoing methods of data compression, or any of the foregoing methods of data decompression, based on instructions stored in the memory.
According to a seventh aspect of some embodiments of the present invention, there is provided a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements any one of the foregoing methods of data compression, or any one of the foregoing methods of data decompression.
Some embodiments of the above invention have the following advantages or benefits. The embodiments of the invention divide the features in the feature sequence into key features and non-key features, and use different encoding modes for the two. Since the content of a non-key feature is generally close to that of its reference feature, encoding the non-key features based on the reference features can greatly increase the compression ratio. Thus, the embodiments of the present invention improve the compression ratio of the data sequence as a whole.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 illustrates a flow diagram of a method of data compression according to some embodiments of the invention.
FIG. 2 illustrates a flow diagram of a non-critical feature encoding method according to some embodiments of the invention.
FIG. 3 is a flow diagram illustrating a non-critical feature encoding method according to further embodiments of the present invention.
Fig. 4 exemplarily shows a data structure diagram of the CHW format.
FIG. 5 illustrates a flow diagram of a key feature encoding method according to some embodiments of the invention.
Fig. 6 illustrates a flow diagram of a method of data decompression according to some embodiments of the invention.
Fig. 7 exemplarily shows a flow diagram of compression and decompression.
FIG. 8 illustrates an apparatus for data compression according to some embodiments of the invention.
Fig. 9 illustrates a block diagram of an apparatus for data decompression according to some embodiments of the invention.
FIG. 10 illustrates a block diagram of a system for data compression according to some embodiments of the invention.
FIG. 11 illustrates a schematic structural diagram of an electronic device according to some embodiments of the invention.
FIG. 12 shows a schematic diagram of an electronic device according to further embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
FIG. 1 illustrates a flow diagram of a method of data compression according to some embodiments of the invention. As shown in fig. 1, the method of data compression of this embodiment includes steps S102 to S110.
In step S102, the data sequence is processed by using the feature extraction network, and a feature sequence is acquired.
The data sequence may be a set of video, consecutive images, or other types of data. The video may be presented in YUV or RAW Bayer format. Further, the data sequence may also be RAW RGB data, pure Y data, or the like.
In some embodiments, each data in the data sequence is processed one by one to obtain corresponding features of each data, and the features form a feature sequence. For example, each image in a video sequence is processed to obtain features of each image, and an image feature sequence is formed.
In some embodiments, features are extracted by Convolutional Neural Networks (CNN).
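Purely as an illustration (the patent does not name a particular network or framework), a per-frame feature sequence could be produced along the following lines; the ResNet-18 backbone, the layer cut, and the PyTorch API are assumptions of this sketch:

```python
import torch
import torchvision

def extract_feature_sequence(frames):
    """Run each frame (a CHW float tensor) through a truncated backbone to get one CHW feature per frame."""
    backbone = torchvision.models.resnet18(weights=None)
    backbone = torch.nn.Sequential(*list(backbone.children())[:-2])  # keep the convolutional trunk only
    backbone.eval()
    with torch.no_grad():
        return [backbone(frame.unsqueeze(0)).squeeze(0).numpy() for frame in frames]
```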
In some embodiments, a sequence of raw features is extracted from a data sequence using a feature extraction network; each feature in the original sequence of features is at least one of transformed or quantized to obtain a sequence of features including integer features. Therefore, subsequent coding processing is facilitated, and coding efficiency is improved.
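A minimal sketch of this transform-and-quantize step for a single feature, assuming a DCT transform and a fixed quantization step; `quant_step` is an illustrative parameter, not something specified by the text:

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_feature(raw_feature: np.ndarray, quant_step: float = 16.0) -> np.ndarray:
    """Transform a raw CHW feature and round to integers (the 'integer feature')."""
    coeffs = dctn(raw_feature, norm="ortho")                   # transform step
    return np.round(coeffs / quant_step).astype(np.int32)      # quantization step

def dequantize_feature(int_feature: np.ndarray, quant_step: float = 16.0) -> np.ndarray:
    """Inverse used at the receiving end: rescale and apply the inverse transform."""
    return idctn(int_feature.astype(np.float64) * quant_step, norm="ortho")
```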
In step S104, one feature out of every first number of features in the feature sequence is determined as a key feature, and the other features are determined as non-key features.
For example, the 1st, 6th, 11th, 16th, ... features in the feature sequence are taken as key features, and the other features are taken as non-key features.
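A minimal sketch of this selection rule, with `first_number` standing in for the "first number" (the value 5 merely reproduces the 1st/6th/11th example above):

```python
def split_key_and_non_key(num_features: int, first_number: int = 5):
    """Mark one feature out of every `first_number` features as a key feature."""
    key_indices = set(range(0, num_features, first_number))        # 0, 5, 10, ... (1st, 6th, 11th, ...)
    non_key_indices = [i for i in range(num_features) if i not in key_indices]
    return sorted(key_indices), non_key_indices
```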
In step S106, the key feature is processed to obtain an encoding result of the key feature.
In some embodiments, the key feature is directly entropy-coded to obtain the coding result of the key feature.
In step S108, an encoding result of the non-critical feature is obtained according to the reference feature of the non-critical feature.
In some embodiments, the reference features are a preset number of features preceding the non-critical feature, for example, the single feature immediately preceding it.
In some embodiments, for each feature in the sequence of features, after obtaining the encoding result of the feature, adding the feature to a buffer, wherein the feature in the buffer serves as a reference feature; in the case where the number of features in the buffer reaches a preset value, the feature added earliest in the buffer is deleted before the feature is added to the buffer. Therefore, when the non-key features are coded, the reference features can be quickly read from the buffer pool, and the coding efficiency is improved.
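The buffer described here behaves like a fixed-size first-in-first-out pool; one possible sketch, with an arbitrary example size of 4:

```python
from collections import deque

class ReferenceBuffer:
    """Pool of recently encoded features that serve as reference features."""
    def __init__(self, preset_size: int = 4):
        self._pool = deque(maxlen=preset_size)   # deque discards the earliest-added entry when full

    def add(self, feature):
        self._pool.append(feature)               # called right after the feature has been encoded

    def latest(self, count: int = 1):
        """Return the most recently added feature(s) for use as reference features."""
        return list(self._pool)[-count:]
```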
Thus, the non-critical features may be encoded based on the reference features, e.g. the encoding result of the non-critical features is the encoding result of the difference information between the non-critical features and the reference features.
In step S110, the information of the feature extraction network and the encoding result of the feature sequence are sent to the receiving end, so that the receiving end obtains the data sequence through decoding and feature reconstruction.
In some embodiments, the information of the feature extraction network includes network type information, parameter information, and the like.
The above-described embodiments divide the features in the feature sequence into key features and non-key features, and use different encoding modes for the two. Since the content of a non-key feature is generally close to that of its reference feature, encoding the non-key features based on the reference features can greatly increase the compression ratio. Thus, embodiments of the present invention improve the compression ratio of the data sequence as a whole.
In some embodiments, each feature in the sequence of features comprises a plurality of blocks. Thus, block-based encoding can be achieved.
For example, each feature in the sequence of features is three-dimensional data comprising a plurality of two-dimensional data blocks arranged in a predetermined dimension. If desired, non-three-dimensional data may be transformed into three dimensions.
Embodiments of encoding non-critical features are described below with reference to fig. 2 and 3, in which two different feature prediction approaches are employed.
FIG. 2 illustrates a flow diagram of a non-critical feature encoding method according to some embodiments of the invention. As shown in fig. 2, the non-critical feature encoding method of this embodiment includes steps S202 to S204.
In step S202, for each block of non-critical features, the difference between the block and the co-located block of the reference feature is calculated.
In step S204, entropy coding is performed on the difference value corresponding to each block of the non-critical feature, so as to obtain a coding result of the non-critical feature.
In this approach, because the difference is calculated between blocks at the same position, the reference block is determined directly, so the amount of calculation is smaller.
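A sketch of this co-located variant for one non-key feature; the entropy coder is a placeholder, since the text does not fix a particular coder:

```python
import numpy as np

def entropy_encode(block: np.ndarray) -> bytes:
    # Placeholder: a real implementation would use an arithmetic, range, or Huffman coder.
    return block.astype(np.int32).tobytes()

def encode_non_key_colocated(non_key: np.ndarray, reference: np.ndarray) -> list:
    """For every block (a W x H plane along C), code the difference to the co-located reference block."""
    return [entropy_encode(non_key[c] - reference[c]) for c in range(non_key.shape[0])]
```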
FIG. 3 is a flow diagram illustrating a method for non-critical feature encoding according to further embodiments of the present invention. As shown in fig. 3, the non-critical feature encoding method of this embodiment includes steps S302 to S306.
In step S302, for each block of non-critical features, a difference between the block and each block of reference features is calculated.
In step S304, for each block of non-critical features, entropy encoding is performed on the minimum difference value and the position information of the block of reference features corresponding to the minimum difference value, so as to obtain the encoded information of the block.
In step S306, the encoding result of the non-critical feature is determined based on the encoding information of each block of the non-critical feature.
In this way, the block with the smallest difference in the reference characteristics can be determined, so that the data amount of the coding can be reduced, and the compression ratio can be further improved.
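A sketch of this search-based variant; using the sum of absolute differences as the "minimum difference" criterion is an assumption, and entropy coding of the result is again omitted:

```python
import numpy as np

def encode_non_key_best_match(non_key: np.ndarray, reference: np.ndarray) -> list:
    """For each block, find the reference block with the smallest residual and keep (residual, position)."""
    coded = []
    for c in range(non_key.shape[0]):
        residuals = non_key[c] - reference            # block minus every reference block (broadcast over C)
        costs = np.abs(residuals).sum(axis=(1, 2))    # one cost per candidate reference block
        best = int(np.argmin(costs))
        coded.append((residuals[best], best))         # residual and position info to be entropy coded
    return coded
```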
In some embodiments, each feature in the sequence of features is in a CHW format, and the predetermined dimension is the dimension in which the C direction is located. Fig. 4 schematically shows a data structure of the CHW format. As shown in fig. 4, each block has a size W × H, and the blocks are arranged in the C direction. In this case, for each block of non-critical features: calculating a difference between the block and each block in the C direction of the reference feature; and entropy coding the minimum difference and the position information of the reference feature block corresponding to the minimum difference in the C direction, to obtain the coding information of the block. Therefore, an existing data format can be used for coding the non-key features, which improves the compatibility of the compression method.
In some embodiments, the key features can be processed in ways other than direct entropy coding. An embodiment of a key feature encoding method is described below with reference to fig. 5.
FIG. 5 illustrates a flow diagram of a key feature encoding method according to some embodiments of the invention. As shown in fig. 5, the key feature encoding method of this embodiment includes steps S502 to S504.
In step S502, the first second number of blocks of the key feature are entropy encoded.
In step S504, for each non-coded block, a coding result of the non-coded block is obtained according to the coded block of the key feature.
This processing approach is similar to that used for non-critical features: it is equivalent to finding a reference block inside the key feature itself and encoding the uncoded block based on that block.
In some embodiments, for each uncoded block, a difference between the block and each coded block in the key feature is calculated, and the minimum difference and the position information of the coded block corresponding to the minimum difference are determined as the coding information of the block.
When the key features are coded, the coded blocks of the key features are used for coding uncoded blocks, so that the compression ratio of the key features is improved, and the coding effect of the data sequence is improved on the whole.
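A sketch of this key-feature variant, coding the first blocks directly and predicting the remaining blocks from already-coded blocks of the same feature; `second_number=1` is only an example and the entropy coder is omitted:

```python
import numpy as np

def encode_key_feature(key: np.ndarray, second_number: int = 1) -> list:
    """First `second_number` blocks are coded directly; later blocks reference earlier blocks of the same feature."""
    coded = []
    for c in range(key.shape[0]):
        if c < second_number:
            coded.append(("direct", key[c]))                       # to be entropy coded as-is
        else:
            residuals = key[c] - key[:c]                           # differences to already-coded blocks
            costs = np.abs(residuals).sum(axis=(1, 2))
            best = int(np.argmin(costs))
            coded.append(("predicted", residuals[best], best))     # minimum residual plus its block position
    return coded
```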
After the receiving end obtains the compressed data, it can perform corresponding decompression processing. An embodiment of the data decompression method of the present invention is described below with reference to fig. 6.
Fig. 6 illustrates a flow diagram of a method of data decompression according to some embodiments of the invention. As shown in fig. 6, the method of data decompression of this embodiment includes steps S602 to S610.
In step S602, information of the feature extraction network and an encoding result of the feature sequence sent by the sending end are obtained, where the feature sequence includes a key feature and a non-key feature.
In step S604, entropy decoding processing is performed on the encoding result to obtain a decoding result of the key feature and a decoding result of the non-key feature.
The sender of the data, i.e. the party performing the compression operation, may perform compression in a variety of ways, and the receiver of the data, i.e. the party performing the decompression operation, needs to perform decompression in a corresponding way. The two can negotiate a specific compression-decompression mode in advance, and a specific compression strategy can also be sent to the receiving party by the sending party. The method for decompressing the key features and the non-key features described later is one of many embodiments, and in a specific implementation, a method corresponding to the sender may be selected, which will not be described herein again.
In some embodiments, the entropy decoding result may be subjected to inverse transformation and inverse quantization processes.
In step S606, a key feature is obtained based on the decoding result of the key feature.
In some embodiments, the decoding result of the key feature is the key feature itself.
In some embodiments, for each block of the key feature after the first second number, determining the block based on the entropy decoding result of the block and the entropy decoding result of the block before the block; from each block of the key feature, the key feature is determined, for example by combining the blocks in sequence to obtain the key feature.
For example, the entropy decoding result of the key feature further includes information of a reference block corresponding to each block, where the reference block is a block located before the corresponding block. In some embodiments, the sum of the encoding result and the entropy decoding result of the reference block is determined as information of the corresponding block.
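Mirroring the key-feature encoding sketch above, reconstruction at the receiving end could look as follows; the tuple layout matches that sketch and is purely illustrative:

```python
import numpy as np

def decode_key_feature(decoded_blocks: list) -> np.ndarray:
    """Rebuild each block from its entropy-decoded payload plus the already-reconstructed reference block."""
    blocks = []
    for entry in decoded_blocks:
        if entry[0] == "direct":
            blocks.append(entry[1])                      # block was coded directly
        else:
            _, residual, ref_index = entry
            blocks.append(residual + blocks[ref_index])  # reference block always precedes the current block
    return np.stack(blocks)                              # reassemble the CHW key feature
```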
In step S608, the decoding result of the non-critical feature is processed based on the decoding result of the reference feature of the non-critical feature, and the non-critical feature is obtained.
In some embodiments, for each block of the non-critical feature, calculating the sum of the entropy decoding result of the block and the co-located block of the reference feature of the non-critical feature, obtaining each block of the non-critical feature; the non-critical features are determined from individual blocks of the non-critical features, for example by combining the blocks in sequence to obtain the non-critical features.
In some embodiments, the decoding result of the non-critical feature further comprises a correspondence between each block of the non-critical feature and a respective one of the reference blocks of the non-critical feature. For each block of non-critical features, calculating the sum of the entropy decoding result of the block and the corresponding reference block of the reference features of the non-critical features to obtain each block of non-critical features; the non-critical features are determined from individual blocks of the non-critical features, for example by combining the blocks in sequence to obtain the non-critical features.
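A corresponding sketch covering both non-key cases described above (co-located residual only, or residual plus a transmitted reference-block position); the data layout again follows the earlier encoding sketches:

```python
import numpy as np

def decode_non_key_feature(decoded_blocks: list, reference: np.ndarray) -> np.ndarray:
    """Add each entropy-decoded residual to its reference block to recover the non-key feature."""
    blocks = []
    for c, entry in enumerate(decoded_blocks):
        if isinstance(entry, tuple):                     # (residual, reference block position) case
            residual, ref_index = entry
            blocks.append(residual + reference[ref_index])
        else:                                            # co-located case: entry is the residual block
            blocks.append(entry + reference[c])
    return np.stack(blocks)
```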
In step S610, feature recovery is performed according to the key features, the non-key features and the information of the feature extraction network, so as to obtain a data sequence.
In some embodiments, at least one of inverse transformation or inverse quantization is performed on the integer key features and non-key features (corresponding to the transformation or quantization in the compression process), so as to generate an original feature sequence; and recovering the characteristics of the original characteristic sequence by utilizing the information of the characteristic extraction network to obtain a data sequence.
Through the embodiment, the receiving end can accurately restore the data sequence based on a smaller data amount, and network transmission resources are saved.
Fig. 7 exemplarily shows a flow diagram of compression and decompression. As shown in fig. 7, at the sending end, an original feature sequence F is extracted from the video through a CNN, and F is then subjected to DCT transformation and quantization to obtain an integer feature sequence I (i.e., quantized coefficients). The key features in I are entropy-coded directly; for the non-key features in I, the difference delta between the non-key feature and its reference feature is calculated through feature prediction, and delta is entropy-coded. The encoding results of the features in I constitute a bit stream. At the receiving end, the bit stream is entropy-decoded. For key features, the entropy decoding result is the feature itself; for non-key features, the sum of the decoding result and the reference feature is calculated to obtain the non-key feature. The key features and non-key features constitute the integer feature sequence I. Then, inverse quantization and inverse DCT transformation are performed on I to obtain the feature sequence F. Finally, feature reconstruction is performed using the parameters of the CNN to obtain the original video data.
Through the embodiment, the sending end can firstly transform and quantize the extracted features to obtain integer features, so that the coding efficiency is improved, and meanwhile, the key features and the non-key features are respectively processed, so that the compression ratio is improved on the premise of reducing information loss. Correspondingly, the key features and the non-key features are respectively restored at the receiving end by using corresponding means, and then inverse quantization and inverse transformation are carried out, so that the original features are obtained. The original data sequence is accurately restored by performing feature reconstruction according to the network parameters used in the feature extraction stage.
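Tying the earlier sketches together, a compressed representation of a whole feature sequence could be assembled roughly as follows; every helper and parameter below comes from the illustrative sketches above rather than from the patent itself:

```python
def compress_sequence(raw_features, first_number=5, quant_step=16.0):
    """End-to-end sketch: quantize, split into key/non-key, and encode against the reference buffer."""
    ref_buf = ReferenceBuffer(preset_size=4)
    bitstream = []
    for i, raw in enumerate(raw_features):                 # raw_features: CHW arrays from the CNN
        feature = quantize_feature(raw, quant_step)
        if i % first_number == 0:                          # key feature
            payload = ("key", encode_key_feature(feature))
        else:                                              # non-key feature: predict from the latest reference
            payload = ("non_key", encode_non_key_best_match(feature, ref_buf.latest()[0]))
        ref_buf.add(feature)
        bitstream.append(payload)
    return bitstream
```

A matching decompressor would reverse these steps with decode_key_feature, decode_non_key_feature, and dequantize_feature from the sketches above.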
An embodiment of the apparatus for data compression of the present invention is described below with reference to fig. 8.
FIG. 8 illustrates an apparatus for data compression according to some embodiments of the invention. As shown in fig. 8, the apparatus 800 for data compression of this embodiment includes: a feature sequence acquisition module 8100 configured to process a data sequence by using a feature extraction network to acquire a feature sequence; a feature determination module 8200 configured to determine one feature out of every first number of features in the feature sequence as a key feature, and determine the other features as non-key features; a key feature encoding module 8300 configured to process the key features to obtain an encoding result of the key features; a non-key feature encoding module 8400 configured to obtain an encoding result of the non-key features according to reference features of the non-key features, where the reference features are a preset number of features preceding the non-key features; and a sending module 8500 configured to send the information of the feature extraction network and the encoding result of the feature sequence to the receiving end, so that the receiving end obtains the data sequence through decoding and feature reconstruction.
In some embodiments, each feature in the sequence of features comprises a plurality of blocks.
In some embodiments, each feature in the sequence of features is three-dimensional data comprising a plurality of two-dimensional data blocks arranged in a predetermined dimension.
In some embodiments, each feature in the sequence of features is in a CHW format, and the predetermined dimension is the dimension in which the C direction is located.
In some embodiments, the non-critical feature encoding module 8400 is further configured to, for each block of non-critical features, calculate a difference between the block and a co-located block of the reference feature; and entropy coding the difference value corresponding to each block of the non-key features to obtain the coding result of the non-key features.
In some embodiments, the non-critical feature encoding module 8400 is further configured to, for each block of non-critical features: calculating a difference between the block and each block of the reference feature; entropy coding is carried out on the minimum difference value and the position information of the block of the reference characteristic corresponding to the minimum difference value to obtain the coding information of the block; and determining the coding result of the non-key features according to the coding information of each block of the non-key features.
In some embodiments, the non-critical feature encoding module 8400 is further configured to, for each block of non-critical features: calculating a difference between the block and each block in the C direction of the reference feature; entropy coding is carried out on the minimum difference and the position information of the reference characteristic block corresponding to the minimum difference in the direction C to obtain the coding information of the block; and determining the coding result of the non-key features according to the coding information of each block of the non-key features.
In some embodiments, the key feature encoding module 8300 is further configured to perform entropy encoding on the key feature to obtain an encoding result of the key feature.
In some embodiments, the key feature encoding module 8300 is further configured to entropy encode the first second number of blocks of the key feature; for each uncoded block, obtaining an encoding result of the uncoded block according to the encoded blocks of the key features.
In some embodiments, the data sequence is RAW data in Bayer format, or data in YUV format.
An embodiment of the inventive apparatus for data decompression is described below with reference to fig. 9.
Fig. 9 illustrates a block diagram of an apparatus for data decompression according to some embodiments of the invention. As shown in fig. 9, the apparatus 900 for data decompression of this embodiment includes: an encoding result obtaining module 9100, configured to obtain information of a feature extraction network and an encoding result of a feature sequence sent by a sending end, where the feature sequence includes a key feature and a non-key feature; an entropy decoding module 9200 configured to perform entropy decoding processing on the encoding result to obtain a decoding result of the key feature and a decoding result of the non-key feature; a key feature obtaining module 9300 configured to obtain a key feature based on a decoding result of the key feature; a non-critical feature obtaining module 9400 configured to process the decoding result of the non-critical feature based on the decoding result of the reference feature of the non-critical feature to obtain a non-critical feature; and the feature recovery module 9500 is configured to perform feature recovery according to the key features, the non-key features and the information of the feature extraction network to obtain a data sequence.
In some embodiments, the decoding result for each feature comprises an entropy decoding result for each block in the feature.
In some embodiments, the non-critical feature obtaining module 9400 is further configured to, for each block of non-critical features, calculate a sum of entropy decoding results of the block and co-located blocks of reference features of the non-critical features, obtaining each block of non-critical features; non-critical features are determined from respective blocks of non-critical features.
In some embodiments, the decoding result of the non-critical features further comprises a correspondence between each block of the non-critical features and a corresponding reference block in the reference features of the non-critical features, and the non-critical feature obtaining module 9400 is further configured to, for each block of non-critical features, calculate the sum of the entropy decoding result of the block and the corresponding reference block of the reference features of the non-critical features, obtaining each block of non-critical features; and determine the non-critical features from the respective blocks of the non-critical features.
In some embodiments, the key feature obtaining module 9300 is further configured to determine, for each block of the key feature after the first second number, a block based on the entropy decoding result of the block and the entropy decoding result of the block before the block; from each block of key features, key features are determined.
FIG. 10 illustrates a block diagram of a system for data compression according to some embodiments of the invention. As shown in fig. 10, the system 100 for data compression of this embodiment includes an apparatus 800 for data compression and an apparatus 900 for data decompression.
In some embodiments, the data compressing apparatus 800 and the data decompressing apparatus 900 are respectively located at a sending end and a receiving end of data, and the sending end and the receiving end are connected through a network.
FIG. 11 illustrates a schematic structural diagram of an electronic device according to some embodiments of the inventions. As shown in fig. 11, the electronic apparatus 110 of this embodiment includes: a memory 1110 and a processor 1120 coupled to the memory 1110, the processor 1120 being configured to perform a method of data compression in any of the embodiments described above or a method of data decompression in any of the embodiments described above based on instructions stored in the memory 1110.
Memory 1110 may include, for example, system memory, fixed non-volatile storage media, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
FIG. 12 shows a schematic diagram of an electronic device according to further embodiments of the present invention. As shown in fig. 12, the electronic apparatus 120 of this embodiment includes: the memory 1210 and the processor 1220 may further include an input/output interface 1230, a network interface 1240, a storage interface 1250, and the like. These interfaces 1230, 1240, 1250, as well as the memory 1210 and the processor 1220, may be coupled via a bus 1260, for example. The input/output interface 1230 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 1240 provides a connection interface for a variety of networking devices. The storage interface 1250 provides a connection interface for external storage devices such as an SD card and a usb disk.
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, wherein the program is configured to implement the method for data compression in any of the foregoing embodiments, or the method for data decompression in any of the foregoing embodiments when executed by a processor.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (23)

1. A method of data compression, comprising:
processing the data sequence by using a feature extraction network to obtain a feature sequence;
determining every other first number of features in the feature sequence as key features, and determining other features as non-key features;
processing the key features to obtain a coding result of the key features;
obtaining an encoding result of the non-key features according to reference features of the non-key features, wherein the reference features are a preset number of features preceding the non-key features;
and sending the information of the feature extraction network and the coding result of the feature sequence to a receiving end so that the receiving end can obtain the data sequence through decoding and feature reconstruction.
2. The method of claim 1, wherein each feature in the sequence of features comprises a plurality of blocks.
3. The method of claim 2, wherein each feature in the sequence of features is three-dimensional data comprising a plurality of two-dimensional data blocks arranged in a preset dimension.
4. The method of claim 3, wherein each feature in the sequence of features is in a CHW format, and the predetermined dimension is a dimension in which the C direction is located.
5. The method according to any one of claims 2 to 4, wherein the obtaining the encoding result of the non-critical feature according to the reference feature of the non-critical feature comprises:
for each block of the non-critical feature, calculating a difference between the block and a co-located block of the reference feature;
and entropy coding the difference value corresponding to each block of the non-key features to obtain a coding result of the non-key features.
6. The method according to any one of claims 2 to 4, wherein the obtaining the encoding result of the non-critical feature according to the reference feature of the non-critical feature comprises:
for each block of the non-critical features:
calculating a difference between the block and each block of the reference feature;
entropy coding is carried out on the minimum difference value and the position information of the block of the reference characteristic corresponding to the minimum difference value, and coding information of the block is obtained;
and determining the coding result of the non-key features according to the coding information of each block of the non-key features.
7. The method according to claim 4, wherein the obtaining the encoding result of the non-critical feature according to the reference feature of the non-critical feature comprises:
for each block of the non-critical features:
calculating a difference between the block and each block in a channel C direction of the reference feature;
entropy coding is carried out on the minimum difference and the position information of the reference feature block corresponding to the minimum difference in the direction C, and coding information of the block is obtained;
and determining the coding result of the non-key features according to the coding information of each block of the non-key features.
8. The method of claim 1, wherein the processing the key feature and obtaining the encoding result of the key feature comprises:
and entropy coding is carried out on the key features to obtain the coding result of the key features.
9. The method according to any one of claims 2 to 4, wherein the processing the key feature to obtain the encoding result of the key feature comprises:
entropy encoding the first second number of blocks of the key feature;
for each uncoded block, obtaining an encoding result of the uncoded block according to the encoded block of the key feature.
10. The method of claim 1, further comprising:
for each feature in the feature sequence, after obtaining an encoding result of the feature, adding the feature into a buffer, wherein the feature in the buffer is used as a reference feature;
in the event that the number of features in the buffer reaches a preset value, deleting the least recently added feature in the buffer before adding the feature to the buffer.
11. The method of claim 1, wherein the processing the data sequence using the feature extraction network to obtain the feature sequence comprises:
extracting a sequence of original features from the data sequence by using a feature extraction network;
at least one of transforming or quantizing each feature in the original sequence of features to obtain a sequence of features including integer features.
12. The method of claim 1, wherein the data sequence is one of RAW data in Bayer format, RAW RGB (red-green-blue) data, YUV (luminance-chrominance) format data, or pure luminance (Y) data.
13. A method of data decompression, comprising:
acquiring information of a feature extraction network and an encoding result of a feature sequence that are sent by a sending end, wherein the feature sequence comprises key features and non-key features;
performing entropy decoding processing on the encoding result to obtain a decoding result of the key feature and a decoding result of the non-key feature;
obtaining the key feature based on the decoding result of the key feature;
processing the decoding result of the non-key feature based on the decoding result of the reference feature of the non-key feature to obtain the non-key feature;
and recovering the characteristics according to the key characteristics, the non-key characteristics and the information of the characteristic extraction network to obtain a data sequence.
14. The method of claim 13, wherein the decoding result for each feature comprises an entropy decoding result for each block in the feature.
15. The method according to claim 14, wherein the processing the decoding result of the non-critical feature based on the decoding result of the reference feature of the non-critical feature to obtain the non-critical feature comprises:
for each block of the non-critical features, calculating the sum of entropy decoding results of the block and co-located blocks of reference features of the non-critical features to obtain each block of the non-critical features;
determining the non-critical features from the respective blocks of non-critical features.
16. The method according to claim 14, wherein the decoding result of the non-critical features further includes a correspondence between each block of the non-critical features and a corresponding reference block in the reference features of the non-critical features, and the processing the decoding result of the non-critical features based on the decoding result of the reference features of the non-critical features to obtain the non-critical features comprises:
for each block of the non-critical features, calculating the sum of the entropy decoding result of the block and the corresponding reference block of the reference features of the non-critical features, obtaining each block of the non-critical features;
determining the non-critical features from the respective blocks of non-critical features.
17. The method of claim 14, wherein the obtaining the key feature based on the entropy decoding result of the key feature comprises:
for each block of the key feature after the first second number, determining the block according to the entropy decoding result of the block and the entropy decoding result of the block before the block;
determining the key feature from each block of the key feature.
18. The method of claim 13, wherein the performing feature recovery according to the key features, the non-key features, and the information of the feature extraction network, and obtaining a data sequence comprises:
performing at least one of inverse transformation or inverse quantization on the integer key features and the non-key features to generate an original feature sequence;
and performing feature recovery on the original feature sequence by using the information of the feature extraction network to obtain a data sequence.
19. An apparatus for data compression, comprising:
the characteristic sequence acquisition module is configured to process the data sequence by utilizing a characteristic extraction network to acquire a characteristic sequence;
the characteristic determining module is configured to determine every other first number of characteristics in the characteristic sequence as key characteristics, and determine other characteristics as non-key characteristics;
the key feature coding module is configured to process the key features to obtain a coding result of the key features;
a non-key feature encoding module configured to obtain an encoding result of the non-key feature according to a reference feature of the non-key feature, wherein the reference feature is a preset number of features preceding the non-key feature;
and the sending module is configured to send the information of the feature extraction network and the coding result of the feature sequence to a receiving end so that the receiving end can obtain the data sequence through decoding and feature reconstruction.
20. An apparatus for data decompression, comprising:
the encoding result acquisition module is configured to acquire information of a feature extraction network and an encoding result of a feature sequence that are sent by a sending end, wherein the feature sequence comprises key features and non-key features;
an entropy decoding module configured to perform entropy decoding processing on the encoding result to obtain a decoding result of the key feature and a decoding result of the non-key feature;
a key feature obtaining module configured to obtain the key feature based on a decoding result of the key feature;
a non-key feature obtaining module configured to process a decoding result of the non-key feature based on a decoding result of a reference feature of the non-key feature to obtain the non-key feature;
and the characteristic recovery module is configured to perform characteristic recovery according to the key characteristics, the non-key characteristics and the information of the characteristic extraction network to obtain a data sequence.
21. A data compression system comprising:
an apparatus for data compression as recited in claim 19; and
apparatus for data decompression as claimed in claim 20.
22. An electronic device, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the method of data compression of any of claims 1-12 or the method of data decompression of any of claims 13-18 based on instructions stored in the memory.
23. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of data compression of any one of claims 1 to 12 or the method of data decompression of any one of claims 13 to 18.
CN202211156353.1A 2022-09-22 2022-09-22 Data compression method, data decompression method and related equipment Pending CN115459780A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202211156353.1A CN115459780A (en) 2022-09-22 2022-09-22 Data compression method, data decompression method and related equipment
PCT/CN2023/120392 WO2024061316A1 (en) 2022-09-22 2023-09-21 Data compression method, data decompression method, and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211156353.1A CN115459780A (en) 2022-09-22 2022-09-22 Data compression method, data decompression method and related equipment

Publications (1)

Publication Number Publication Date
CN115459780A true CN115459780A (en) 2022-12-09

Family

ID=84307637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211156353.1A Pending CN115459780A (en) 2022-09-22 2022-09-22 Data compression method, data decompression method and related equipment

Country Status (2)

Country Link
CN (1) CN115459780A (en)
WO (1) WO2024061316A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024061316A1 (en) * 2022-09-22 2024-03-28 中国电信股份有限公司 Data compression method, data decompression method, and related device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102572435B (en) * 2012-01-16 2014-03-12 中南民族大学 Compressive sampling-based (CS-based) video coding/decoding system and method thereof
CN111654706B (en) * 2020-05-06 2022-06-28 山东浪潮科学研究院有限公司 Video compression method, device, equipment and medium
CN114257818A (en) * 2020-09-22 2022-03-29 阿里巴巴集团控股有限公司 Video encoding and decoding method, device, equipment and storage medium
US20220109889A1 (en) * 2021-12-16 2022-04-07 Intel Corporation Apparatus, articles of manufacture, and methods for improved adaptive loop filtering in video encoding
CN115459780A (en) * 2022-09-22 2022-12-09 中国电信股份有限公司 Data compression method, data decompression method and related equipment

Also Published As

Publication number Publication date
WO2024061316A1 (en) 2024-03-28

Similar Documents

Publication Publication Date Title
US10880566B2 (en) Method and device for image encoding and image decoding
RU2567988C2 (en) Encoder, method of encoding data, decoder, method of decoding data, system for transmitting data, method of transmitting data and programme product
JP4784281B2 (en) Decoding device, inverse quantization method, and program thereof
JP4831547B2 (en) Method for image compression and decompression acceleration
CN108881913B (en) Method and apparatus for image encoding
CN111131828B (en) Image compression method and device, electronic equipment and storage medium
WO2024061316A1 (en) Data compression method, data decompression method, and related device
CN113473142B (en) Video encoding method, video decoding method, video encoding device, video decoding device, electronic equipment and storage medium
KR101277712B1 (en) Method and apparatus for image processing
CN108182712B (en) Image processing method, device and system
CN112887713B (en) Picture compression and decompression method and device
CN112714313A (en) Image processing method, device, equipment and storage medium
CN113810654A (en) Image video uploading method and device, storage medium and electronic equipment
WO2017036061A1 (en) Image encoding method, image decoding method and device
CN108668169B (en) Image information processing method and device, and storage medium
CN113949867B (en) Image processing method and device
JPH05284368A (en) Method and device for encoding/restoring image data
CN103841396A (en) Coding method and system for stereo video
CN111988621A (en) Video processor training method and device, video processing device and video processing method
CN111385579B (en) Video compression method, device, equipment and storage medium
KR100512276B1 (en) Apparatus and Method for Compression of Image Data
WO2022205987A1 (en) Image processing method and system, encoder, and computer readable storage medium
CN116389772B (en) Beidou network-based image transmission method and system
CN116347080B (en) Intelligent algorithm application system and method based on downsampling processing
Tiwari et al. A comparative study on image and video compression techniques

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination