CN112135150A - Image compression and decompression method, readable medium and electronic device thereof - Google Patents


Info

Publication number
CN112135150A
Authority
CN
China
Prior art keywords
image
data
compressed
color channel
image data
Prior art date
Legal status
Pending
Application number
CN202011015710.3A
Other languages
Chinese (zh)
Inventor
孙滨璇 (Sun Binxuan)
Current Assignee
ARM Technology China Co Ltd
Original Assignee
ARM Technology China Co Ltd
Priority date
Filing date
Publication date
Application filed by ARM Technology China Co Ltd filed Critical ARM Technology China Co Ltd
Priority to CN202011015710.3A
Publication of CN112135150A

Classifications

    • H — ELECTRICITY
      • H04 — ELECTRIC COMMUNICATION TECHNIQUE
        • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N 19/85 — using pre-processing or post-processing specially adapted for video compression
            • H04N 19/10 — using adaptive coding
              • H04N 19/102 — characterised by the element, parameter or selection affected or controlled by the adaptive coding
                • H04N 19/124 — Quantisation
                • H04N 19/13 — Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
          • H04N 9/00 — Details of colour television systems
            • H04N 9/64 — Circuits for processing colour signals
              • H04N 9/646 — for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
              • H04N 9/70 — for colour killing
                • H04N 9/71 — combined with colour gain control

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The application relates to the field of image processing, and discloses an image compression method, an image decompression method, a readable medium, and an electronic device. The image compression method comprises: acquiring image data to be compressed and at least one gain factor of a gain applied to the image to be compressed; obtaining first intermediate data based on the image data to be compressed and the gain factor; and compressing the first intermediate data to obtain compressed image data. The image decompression method comprises: acquiring the compressed image data and the gain factor; decompressing the compressed image data to obtain the first intermediate data; and obtaining decompressed image data based on the first intermediate data and the gain factor. By applying a reverse gain to the image before compressing it, the compression ratio is greatly reduced, so the compressed image occupies less bandwidth when written to or read from the DDR and system bandwidth is saved; in addition, compression and decompression efficiency is improved by a cross-compression and cross-decompression scheme.

Description

Image compression and decompression method, readable medium and electronic device thereof
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image compression method, an image decompression method, a readable medium, and an electronic device using the readable medium.
Background
An Image Signal Processor (ISP) used in digital imaging equipment is a pipelined engine dedicated to image processing and can process image signals at high speed. The ISP is the unit chiefly responsible for post-processing the signal output by the front-end image sensor: when the digital imaging device captures an image, the image processing pipeline (e.g., the ISP pipeline) applies a series of image processing algorithms to produce a full-color, processed, compressed image in a standard image format. To facilitate the storage, network transmission, and management of image files, and the temporary storage of each frame of a video, electronic devices with basic shooting functions, such as cameras, often need to compress and decompress large numbers of image files within the ISP pipeline. Existing compression methods can be divided into lossless and lossy techniques. Lossless compression preserves image quality, but its compression ratio is higher (the compressed output is larger). Lossy compression achieves a lower compression ratio and greatly shrinks the image file, but degrades image quality.
As shown in fig. 1, a typical image compression/decompression workflow in current ISP pipelines is: compress the raw image data at the front end of the ISP pipeline and write it as a bit stream into Double Data Rate synchronous dynamic random access memory (DDR SDRAM, hereafter DDR); when the compressed image data stored in the DDR is needed, access the DDR and decompress the written bit stream to recover the raw image data. Fig. 2 is the flow chart of a typical compression algorithm corresponding to fig. 1: the compression process sequentially performs differential preprocessing, quantization, and entropy coding on the raw image data to obtain the compressed data.
The compression in the above flow may use a lossy or a lossless technique. However, an image in the ISP pipeline may have various gains applied to it, such as the gain of the image sensor (Sensor), the gain introduced by the Automatic Gain Control (AGC) module in the ISP pipeline, and the White Balance (WB) gain. These gains increase quantization error and destroy statistical properties of the image data that benefit compression, so the compression ratio cannot reach its ideal value. As a result, compressed image data written to or read from the DDR occupies a large bandwidth, read/write speed and efficiency are low, and ISP processing performance suffers.
Disclosure of Invention
The embodiments of the application provide an image compression method, an image decompression method, a readable medium, and an electronic device. When image data to be compressed is acquired, the gain factor of each gain already applied to it is also acquired, and the image data is divided by the gain factor to apply a reverse gain, reducing the data back to its magnitude before the gain was applied. The image is then compressed, lowering the compression ratio, so that the compressed image occupies less bandwidth when written to the DDR and system bandwidth is saved. The image compression method provided by the application further improves compression and decompression efficiency through a cross-compression and cross-decompression scheme.
In a first aspect, an embodiment of the present application provides an image compression method, which is used for an electronic device with a shooting function, and the method includes: acquiring image data to be compressed and at least one gain factor of gain applied to the image to be compressed; obtaining first intermediate data based on the image data to be compressed and the gain factor; and compressing the first intermediate data to obtain compressed image data.
When one gain has been applied to the image to be compressed, the image data to be compressed is divided by that gain's factor to obtain the first intermediate data; when two or more gains have been applied, the image data to be compressed is divided by the product of their gain factors to obtain the first intermediate data.
For example, one gain may have been applied to the acquired image data to be compressed, in which case one gain factor is acquired along with the image; or two or more gains may have been applied, in which case the corresponding gain factors are acquired along with the image and accumulated as a product. Before compression, a reverse gain is applied by dividing the image data to be compressed by the gain factor or the product of gain factors.
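A minimal sketch of this reverse-gain step and its inverse; the function names and the use of NumPy arrays are illustrative assumptions, not from the application:

```python
import numpy as np

def reverse_gain(image, gain_factors):
    """Divide image data by the product of all applied gain factors.

    image        -- pixel data as a float array
    gain_factors -- gain factors already applied (sensor, AGC, WB, ...)
    """
    total_gain = np.prod(gain_factors)   # accumulate the gains as a product
    return image / total_gain            # undo the gains before compression

def reapply_gain(image, gain_factors):
    """Inverse of reverse_gain: multiply decompressed data by the same product."""
    return image * np.prod(gain_factors)
```

Dividing by the product first and multiplying it back after decompression keeps the compressed data at the magnitude of the pre-gain original.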
In one possible implementation of the first aspect, the image data to be compressed comprises a plurality of cyclically arranged color channel groups; the arrangement of the color channels and the number of color channels are the same in every color channel group; each color channel group contains at least two color channels arranged in an interleaved (cross) pattern. The method further comprises: the first intermediate data comprises the plurality of cyclically arranged color channel groups, and the data of each color channel in the color channel groups is extracted and arranged to obtain second intermediate data; the second intermediate data is compressed to obtain third intermediate data; and the third intermediate data is fused according to the arrangement of the color channels in the color channel groups to obtain the compressed image data.
For example, the reverse-gained image data to be compressed is compressed by cross compression: the data of each color channel in the image data is extracted and compressed separately, and the compressed channel data is then fused back together to form the compressed image data. The channel arrangement within each color channel group of the compressed image data is the same as in the image data to be compressed, and so is the number of channels per group.
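A sketch of the per-channel extraction and fusion for a GRBG Bayer mosaic, as one concrete instance of the interleaved channel groups described above; the function names and the stand-in `encode_channel` codec are illustrative assumptions:

```python
import numpy as np

def split_bayer_grbg(raw):
    """Extract the four interleaved channels of a GRBG Bayer mosaic."""
    return {"Gr": raw[0::2, 0::2], "R": raw[0::2, 1::2],
            "B": raw[1::2, 0::2], "Gb": raw[1::2, 1::2]}

def fuse_bayer_grbg(ch):
    """Interleave the four channels back into a GRBG mosaic (inverse of split)."""
    h, w = ch["Gr"].shape
    raw = np.empty((2 * h, 2 * w), dtype=ch["Gr"].dtype)
    raw[0::2, 0::2], raw[0::2, 1::2] = ch["Gr"], ch["R"]
    raw[1::2, 0::2], raw[1::2, 1::2] = ch["B"], ch["Gb"]
    return raw

def cross_compress(raw, encode_channel):
    """Compress each extracted channel separately with encode_channel."""
    return {name: encode_channel(plane)
            for name, plane in split_bayer_grbg(raw).items()}
```

Splitting first means each compressed plane contains samples of a single primary color, which is what lets the per-channel statistics be exploited.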
In a possible implementation of the first aspect, the color channel data comprises the pixel values of the color channels in the pixel data of the image to be compressed, and extracting and arranging the color channel data comprises: extracting the pixel values of each color channel in the scanning order of the pixels of the image to be compressed, and arranging the extracted pixel values in that scanning order to obtain the second intermediate data.
For example, for the reverse-gained image data to be compressed, the data extracted for each color channel is that channel's pixel value data; during compression, the image data to be compressed, the per-channel data, the pixel value data, and so on are transmitted as bit streams.
In one possible implementation of the first aspect, the second intermediate data includes pixel values on each of the color channels, and the method further includes: and compressing the pixel values on each color channel through differential preprocessing, quantization and entropy coding in sequence to obtain the third intermediate data.
For example, the bit stream of each color channel is compressed by the typical compression algorithm flow, which mainly consists of three stages performed in sequence: differential preprocessing, quantization, and entropy coding. Each stage compresses the data once, and its output serves as the input of the next stage.
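The three-stage flow can be sketched as a composition of per-channel passes. This is a simplified model under stated assumptions: the real codec works on bit streams, and `entropy_encode` is left as an identity placeholder:

```python
def differential_preprocess(values):
    """Keep the first value; store each later value as its difference from the first."""
    return [values[0]] + [v - values[0] for v in values[1:]]

def quantize(values, step):
    """Uniform quantization: map values to a smaller range by integer division."""
    return [v // step for v in values]

def entropy_encode(values):
    """Placeholder for entropy coding (e.g., Huffman or Golomb); identity here."""
    return values

def compress_channel(values, step=2):
    """Typical compression flow: differencing, then quantization, then entropy coding."""
    return entropy_encode(quantize(differential_preprocess(values), step))
```

Each stage's output feeds the next, matching the fig. 2 ordering.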
In a possible implementation of the first aspect, the method further includes: the compressed image data includes the cyclically arranged plurality of color channel groups.
For example, the compressed image data and the image data to be compressed have the same number of circularly arranged color channel groups, and the arrangement and number of the color channels in each color channel group are the same.
In a second aspect, an embodiment of the present application provides an image decompression method, where the method includes: acquiring the compressed image data and the gain factor obtained by the image compression method; decompressing the compressed image data to obtain the first intermediate data; obtaining decompressed image data based on the first intermediate data and the gain factor.
When one gain was applied to the image to be compressed, the first intermediate data is multiplied by that gain's factor to obtain the decompressed image data; when two or more gains were applied, the first intermediate data is multiplied by the product of their gain factors to obtain the decompressed image data.
For example, if the reverse gain divided the image data to be compressed by the factor of a single gain, then during decompression the compressed image data is decompressed and multiplied by the same gain factor (i.e., the gain is reapplied) to obtain the decompressed image data. If the reverse gain divided by the product of two or more gain factors, then the decompressed data is multiplied by the product of the same gain factors (i.e., the gains are reapplied) to obtain the decompressed image data.
In one possible implementation of the second aspect, the compressed image data comprises a plurality of cyclically arranged color channel groups; the arrangement of the color channels and the number of color channels are the same in every color channel group; each color channel group contains at least two color channels arranged in an interleaved (cross) pattern. The method further comprises: extracting and arranging the data of each color channel in the color channel groups to obtain the third intermediate data; decompressing the third intermediate data to obtain the second intermediate data; fusing the second intermediate data according to the arrangement of the color channels in the color channel groups to obtain the first intermediate data; and multiplying the first intermediate data by the gain factor to obtain the decompressed image data.
For example, the compressed image data is decompressed by cross decompression: the data of each color channel in the compressed image data is extracted and decompressed separately, and the decompressed channel data is then fused back together. The channel arrangement and the number of channels in each color channel group of the fused result are the same as in the compressed image data. It should be noted that the fused result before the gain is reapplied (the first intermediate data) should not be confused with the final decompressed image data in the description of this application.
In a possible implementation of the second aspect, the color channel data comprises the pixel values of each color channel in the pixel data of the compressed image data, and the method further comprises: extracting the pixel values of each color channel in the scanning order of the pixels, and arranging the extracted pixel values in that scanning order to obtain the third intermediate data.
For example, the data extracted for each color channel of the compressed image data is that channel's pixel value data; during decompression, the compressed image data, the decompressed image data, the per-channel data, the pixel value data, and so on are transmitted as bit streams.
In one possible implementation of the second aspect, the third intermediate data includes pixel values on the respective color channels, and decompressing the third intermediate data includes: sequentially performing entropy decoding, inverse quantization and inverse differential preprocessing on the pixel values on each color channel to obtain the second intermediate data, wherein the entropy decoding is an inverse process of entropy coding, the inverse quantization is an inverse process of quantization, and the inverse differential preprocessing is an inverse process of differential preprocessing.
For example, the bit stream of each color channel is decompressed by the typical decompression algorithm flow, which is the inverse of the typical compression algorithm flow and mainly consists of entropy decoding, inverse quantization, and inverse differential preprocessing performed in sequence.
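Mirroring the compression flow, a per-channel sketch of the three decompression stages; again a simplified model with an identity `entropy_decode` placeholder, and note that inverse quantization can only approximate detail discarded by quantization:

```python
def entropy_decode(values):
    """Placeholder inverse of entropy coding; identity here."""
    return values

def dequantize(values, step):
    """Inverse quantization: scale values back up (lossy; sub-step detail is gone)."""
    return [v * step for v in values]

def inverse_differential(diffs):
    """Inverse differential preprocessing: add the first value back to each difference."""
    return [diffs[0]] + [diffs[0] + d for d in diffs[1:]]

def decompress_channel(data, step=2):
    """Typical decompression flow: entropy decoding, inverse quantization,
    then inverse differential preprocessing."""
    return inverse_differential(dequantize(entropy_decode(data), step))
```

The stage order is exactly reversed relative to compression, as the second-aspect implementation requires.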
In one possible implementation of the second aspect, the decompressed image data is the same as the image data to be compressed. For example, the decompressed image data and the image data to be compressed have the same number of circularly arranged color channel groups, and the arrangement and number of color channels in each color channel group are the same.
In a third aspect, an embodiment of the present application provides a readable medium, where instructions are stored, and when executed on an electronic device, the instructions cause the electronic device to execute the above-mentioned image compression method or the above-mentioned image decompression method.
In a fourth aspect, an embodiment of the present application provides an electronic device, including: a memory for storing instructions for execution by one or more processors of the electronic device, and a processor, which is one of the processors of the electronic device, for performing the image compression method and the image decompression method.
Drawings
FIG. 1 illustrates an exemplary workflow diagram for image compression, according to an embodiment of the application.
FIG. 2 shows a flow diagram of an exemplary compression algorithm, according to an embodiment of the present application.
FIG. 3 shows a schematic diagram of an image generation process with a shot photograph as an exemplary scene, according to an embodiment of the application.
Fig. 4 shows a schematic diagram of a process for processing an image in the ISP 103 according to an embodiment of the present application.
Fig. 5 shows a workflow diagram of the image compression and decompression method of the present application, according to an embodiment of the present application.
FIG. 6 shows a flow diagram of a cross-compression algorithm, according to an embodiment of the present application.
Fig. 7 shows a flow diagram of a cross-decompression algorithm, according to an embodiment of the present application.
Fig. 8 shows a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
FIG. 9 illustrates a system block diagram of a System On Chip (SOC), according to an embodiment of the present application.
Detailed Description
Illustrative embodiments of the present application include, but are not limited to, image compression and decompression methods, readable media, and electronic devices thereof.
It is to be appreciated that as used herein, the term module may refer to or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality, or may be part of such hardware components.
FIG. 3 is a schematic diagram of the image generation process, taking photograph capture as an example scene. As shown in fig. 3, the process of generating an image (including each frame of a video) is generally: an optical signal gathered by the lens (Lens) 101 of an electronic device 100 with basic shooting functions reaches the photosensitive area on the surface of the image sensor 102, where it is photoelectrically converted into raw (RAW) image data (for example, Bayer format) and passed to the ISP 103. After algorithmic processing, the ISP 103 outputs a BMP-format or YUV-format image to the image acquisition unit at the back end, yielding the captured image.
The electronic device 100 includes cameras, video cameras, tablet computers, smartphones, and other electronic devices with basic shooting functions that use an ISP chip; no limitation is intended here.
The ISP 103 consists of ISP logic and the firmware (Firmware) running on it. Besides performing part of the algorithmic processing, the logic unit gathers real-time statistics on the current image. The firmware reads the image statistics from the ISP logic, recomputes control parameters from them, and feeds them back to the lens, the image sensor 102, and the ISP logic to adjust image quality automatically. The firmware of the ISP 103 comprises three parts: the ISP control unit and basic algorithm library, the AE/AWB/AF algorithm library, and the sensor library.
Fig. 4 is a schematic diagram of the image processing flow within the ISP 103. As shown in fig. 4, RAW image data (e.g., Bayer format) entering the ISP 103 passes sequentially through the following modules: a black level compensation module 1031, a lens shading correction module 1032, a bad pixel correction module 1033, a color interpolation (demosaic) module 1034, a Bayer noise removal module 1035, a White Balance (WB) correction module 1036, a color correction module 1037, a gamma correction module 1038, a color space conversion (RGB to YUV) module 1039, a YUV-domain color noise removal and edge enhancement module 1040, a color and contrast enhancement module 1041, an automatic exposure control module 1042, an automatic gain control module 1043, and so on. The ISP then outputs YUV (or RGB) format data, which is transmitted through an I/O interface to the Central Processing Unit (CPU) for further processing. The ISP 103 also includes a compression-decompression module 1044 for compressing images and accessing the DDR for image decompression; this module can be inserted at any node of the image processing flow, the specific insertion point being determined by the actual application requirements.
In an ISP pipeline, an image may be compressed and decompressed at any node of the image processing chain in the ISP 103, depending on image or video processing needs. For example, when processing video, each frame needs some data from the previous frame; in this case the current frame's data must be compressed and written into the DDR for temporary storage, and the DDR is accessed while processing the next frame to decompress and read the stored frame data. Because DDR read/write bandwidth is limited, image data is compressed and encoded before being written to the DDR, decoding and decompression being the inverse processes, so as to minimize the bandwidth occupied when writing and reading. The lower the image's compression ratio, the less bandwidth the compressed image occupies entering and leaving the DDR, and the higher the read/write efficiency; in particular, other software or the CPU accessing the DDR is then less affected by the image traffic.
As noted above, fig. 1 shows the typical workflow for compressing and decompressing image data in an ISP pipeline. Because an image in the ISP pipeline has multiple gains applied to it, and the gains applied differ across positions in the processing chain, these gains raise the image's compression ratio, increase the bandwidth occupied when the compressed image is written to or read from the DDR, and lower compression and decompression efficiency. For example, if the image is compressed for temporary storage before the color interpolation module 1034 in the ISP pipeline, the gains already applied include the lens shading gain, etc.; if before the white balance correction module 1036, the applied gains include the lens shading gain, the sensor analog gain, and the sensor digital gain (the sensor gain comprises analog and digital parts); if after the color space conversion module 1039, the applied gains include the lens shading gain, the sensor analog gain, the sensor digital gain, and the WB gain. The compression ratio varies with the degree of gain: in general, the more gain applied to an image, the higher its compression ratio rises and the poorer its compressibility becomes.
To address this problem, the image compression method provided by the application divides the image data to be compressed by the corresponding gain factor before compression, reducing the data to the same magnitude as the original image data before the gain was applied, and then compresses it. This lowers the compression ratio, reduces the bandwidth the compressed image occupies when written into the DDR, and saves system bandwidth. The method also improves compression and decompression efficiency through cross compression and cross decompression, which are applicable to a variety of image color formats, such as the Bayer, BMP, and YUV (YUV color space) formats.
For ease of understanding, a brief description will be given below of some concepts involved in the present embodiment.
(1) Related concepts in a typical compression algorithm flow
Typical compression algorithms are widely used in ISP pipelines. As shown in fig. 2, the compression process of a typical compression algorithm mainly consists of differential preprocessing, quantization, and entropy coding applied to the original data, after which the compressed data is output.
In a typical compression algorithm flow, differential preprocessing means that, of the acquired pixel data to be compressed, only the first pixel value is stored in full, and every subsequent pixel is stored as its difference from the first. For example, for a Bayer-format image whose adjacent pixels (e.g., 4 pixels) have the values (200, 200, 200, 200), storing the four pixels directly takes 4 × 10 bits (for 10-bit values in the range 0 to 1023) or 4 × 8 bits (for 8-bit values); after differential preprocessing the values become (200, 0, 0, 0), where only the first value needs a full 8 bits and the zero differences require almost no storage. In general, differential preprocessing efficiently compresses pixel value data and reduces the storage space required.
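The differencing described above and its lossless inverse can be sketched as (function names are illustrative):

```python
def diff_preprocess(pixels):
    """Store the first pixel; store each later pixel as its difference from the first."""
    return [pixels[0]] + [p - pixels[0] for p in pixels[1:]]

def diff_restore(diffs):
    """Inverse differential preprocessing: add the first value back to each difference."""
    return [diffs[0]] + [diffs[0] + d for d in diffs[1:]]
```

The round trip reproduces the original values exactly, which is why this stage by itself loses no information.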
In a typical compression algorithm flow, quantization maps data of a given bit width, after differential preprocessing, to data of a smaller bit width; it discards or scales down large pixel values in the image to reduce storage occupancy. Quantization may be uniform or non-uniform.
Uniform quantization converts all pixels by a single rule. For example, uniformly quantizing pixels from 256 gray levels to 128 gray levels divides each pixel value by 2: a pixel value of 256 becomes 128, and a pixel value of 230 becomes 115.
Non-uniform quantization sets targeted quantization levels according to the current value range of each pixel. For example, pixels in the 0–16 gray-level range may be left unquantized, keeping their original values; pixels in the 16–32 range may keep only 16 gray levels, dividing by 2, so a pixel value of 32 is quantized to 16 and a value of 28 to 14; and pixels in the 128–256 range may be quantized to 64 gray levels, dividing by 4, so a value of 256 is quantized to 64 and a value of 240 to 60. The differenced data of four adjacent pixels (200, 0, 0, 0) needs 8 bits to store; after the non-uniform quantization above (200 lies in the 128–256 range and is divided by 4) it becomes (50, 0, 0, 0).
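The range-dependent rule of this example can be sketched as follows; the ranges and steps are the example's own, the 32–128 branch is unspecified in the text and is filled with a halving placeholder, and the function name is illustrative:

```python
def nonuniform_quantize(v):
    """Quantize a pixel value with a range-dependent step, per the example above."""
    if v <= 16:
        return v           # 0-16 gray levels: keep the original value
    if v <= 32:
        return v // 2      # 16-32: keep only 16 levels by halving
    if v >= 128:
        return v // 4      # 128-256: keep 64 levels by quartering
    return v // 2          # 32-128 is not specified in the example; halve as a placeholder
```

Smaller values are preserved more precisely than large ones, which is the point of the non-uniform scheme.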
In a typical compression algorithm flow, entropy encoding further encodes and compresses the quantized bit stream, i.e., it encodes without losing any information, following the entropy principle. The bit stream obtained by entropy coding is shorter than the original sequence, so entropy coding compresses the data further, improving transmission efficiency and reducing the bandwidth occupied when the data enters and leaves the DDR. Common entropy-coding types include, but are not limited to, Huffman coding, Golomb coding, Shannon coding, and arithmetic coding.
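As an illustration of one of the entropy-coding types named above, a compact Huffman-code construction can be sketched in Python (illustrative only; the heap layout and function names are invented for this example, and a hardware entropy coder works quite differently):

```python
# Huffman coding sketch: frequent symbols receive shorter bit strings,
# so a quantized stream clustered around 0 shrinks markedly.
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix code mapping each symbol to a bit string."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: [frequency, tiebreak index, partial code table].
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

# The quantized differences from the text: (20, 0, 0, 0).
data = [20, 0, 0, 0]
codes = huffman_codes(data)
encoded = "".join(codes[v] for v in data)
assert len(encoded) == 4      # 4 bits instead of 4 x 8 raw bits
```

Because the code is prefix-free, decoding recovers the exact symbol sequence, consistent with the lossless property stated above.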
(2) Bayer format: refers to the raw image pixel array produced under a Bayer color filter matrix, and is typically an early-stage image format. A Bayer pattern follows a rule such as GRBG, and each pixel of Bayer-format image data carries only one of the three values R, G, B. Since every pixel captures only one of the three primary colors and lacks the other two, the result is a mosaic image. To obtain a full-color image, the two missing colors of each pixel must be estimated from the color information of the surrounding pixels; this process is called color interpolation or demosaicing (demosaic).
(3) BMP (bitmap) format: one of the image formats. A BMP-format image uses the RGB color space, in which each pixel is represented by the color values of three channels: red (R), green (G), and blue (B). RGB is one of the most widely used color systems at present.
(4) YUV format: one of the image formats. A YUV-format image uses the YUV color space, where the Y channel carries the luminance (Luma) signal, whose value ranges from dark to light and is the signal a black-and-white TV can display; the U channel carries the blue chrominance component (Cb), and the V channel the red chrominance component (Cr). Common YUV sampling formats are YUV4:4:4, YUV4:2:2, YUV4:1:1 and YUV4:2:0. In YUV4:4:4 sampling, each Y has its own set of UV components; in YUV4:2:2 sampling, every two Y samples share one set of UV components; in YUV4:2:0 sampling, every four Y samples share one set of UV components. YUV4:2:0 is the most commonly used: each pixel of a YUV4:2:0 image includes the color values of two channels, the Y channel plus either U or V, so the pixel value data of two consecutive pixels is typically arranged in the YUYV form.
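The storage savings of the sampling formats listed above can be checked with simple arithmetic (assuming 8 bits per Y, U and V sample, which is common but not universal):

```python
# Average bits per pixel when `y_per_uv_set` Y samples share one UV pair.

def yuv_bits_per_pixel(y_per_uv_set):
    # 8 bits of Y per pixel, plus one 8-bit U and one 8-bit V sample
    # amortized over the Y samples that share them.
    return 8 + (8 + 8) / y_per_uv_set

assert yuv_bits_per_pixel(1) == 24.0   # YUV4:4:4 - every Y has its own UV
assert yuv_bits_per_pixel(2) == 16.0   # YUV4:2:2 - two Y share one UV set
assert yuv_bits_per_pixel(4) == 12.0   # YUV4:2:0 - four Y share one UV set
```

This halving from 24 to 12 bits per pixel is one reason YUV4:2:0 dominates in practice.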
(5) Applying a gain: gain is the degree to which a component, circuit, device, or system increases a current, voltage, or power. Applying a gain to an image means multiplying the image data by a gain factor during image generation and processing, when the image signal is weak due to environmental factors or does not meet data-processing requirements. Applying gain is a necessary step in the generation of any image or video. For example, a photograph taken in dim light is adjusted by automatic exposure control, so an exposure gain is applied to the brightness of the captured image; a sensor gain, a white balance (WB) gain, and others may also be applied. Even an image captured under ideal lighting passes through the sensor gain, the WB gain, and so on.
The following describes the specific flow of image compression-decompression according to this embodiment in detail with reference to fig. 5-7.
Fig. 5 shows a schematic flow chart of image compression-decompression in the ISP103 in the present embodiment. The process of image compression-decompression according to the present application includes the following steps.
501: the method includes acquiring image data to be compressed and acquiring a gain factor of at least one gain applied on the image data to be compressed.
In the ISP 103, image data is transmitted and stored as a bit stream in the ISP pipeline, and the larger the image data, the more bits its transmission or storage requires. The gain factor applied to the image at any processing stage can be obtained from the corresponding component of the electronic device 100. For example, the sensor gain is determined by the sensor's parameters, so its gain factor can be read directly from those parameters; the gain factor of the automatic gain control module 1043 is calculated by its own automatic gain control algorithm, so its value can be obtained from that algorithm. The compression-decompression module 1044 can therefore acquire, together with the image data to be compressed, the types of gain already applied to it and the corresponding gain factors.
The types of gain applied to the image data to be compressed also depend on the processing stage at which the data is obtained: image data acquired at different processing stages may carry different types of applied gain. For example, the image data to be compressed may be acquired from the image sensor 102 or from the automatic gain control module 1043. If it is acquired from the image sensor 102, the applied gains include the sensor gain, which comprises a sensor analog gain and a sensor digital gain (the analog gain is generally applied first, when the sensor forms the raw image data); if it is acquired from the white balance correction module 1036, the applied gains include the white balance gain as well as the gains applied before the white balance correction module 1036, such as the sensor gain.
It will be appreciated that if two or more gain types have been applied to the image to be compressed, the overall gain factor applied to the image data is the product of the gain factors of each gain type. For example, if the image data to be compressed is obtained from the white balance correction module 1036, and the gain factor of the sensor analog gain is k1, the gain factor of the sensor digital gain is k2, and the gain factor of the white balance gain is k3, then the product of the gain factors applied to the acquired image data to be compressed is k1*k2*k3.
502: divide the acquired image data to be compressed by the corresponding gain factor, restoring it to the smaller original image data as it was before the gains were applied.
It will be appreciated that this process of dividing by the gain factors may be called reverse gain. For example, if the product of all gain factors applied to the image data to be compressed is k1*k2*k3, then during the reverse gain the data is divided in turn by k1, k2 and k3, or divided once by the product k1*k2*k3, yielding the original image data as it was before the gains were applied. Since this original data needs much less storage space than the image data to be compressed, the reverse gain helps reduce the image compression ratio.
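The reverse gain and its later reapplication (flow 507) can be sketched as follows (an illustrative model with invented function names; real pixel data is fixed-point rather than floating-point, and the gain factors shown are hypothetical):

```python
# Reverse gain: divide out the product of all applied gain factors so the
# smaller pre-gain values are what get compressed and stored.

def reverse_gain(pixels, gain_factors):
    """Divide each pixel by the product of all applied gain factors."""
    product = 1.0
    for k in gain_factors:
        product *= k
    return [p / product for p in pixels]

def reapply_gain(pixels, gain_factors):
    """Inverse step used after decompression: multiply the product back in."""
    product = 1.0
    for k in gain_factors:
        product *= k
    return [p * product for p in pixels]

k1, k2, k3 = 2.0, 2.0, 1.5          # hypothetical sensor/WB gain factors
raw = reverse_gain([600.0, 0.0], [k1, k2, k3])
assert raw == [100.0, 0.0]          # smaller values need fewer bits to store
assert reapply_gain(raw, [k1, k2, k3]) == [600.0, 0.0]
```

Dividing by each factor in turn or by the product once is mathematically equivalent, which is why the text allows either order.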
503: extract the pixel value data of each color channel from the reverse-gained original image data, compress the extracted per-channel pixel value data with a typical compression algorithm, and fuse the compressed per-channel data back together according to the arrangement of the color channels in the original image data, thereby obtaining the compressed image data and completing the image compression.
This method of extracting the pixel value data of each color channel from the original image data, compressing each channel, and then fusing the compressed per-channel data according to the channel arrangement of the original image data can be called cross compression. It will be appreciated that the image data to be compressed consists of cyclically repeated color channel groups in which the channel arrangement and channel count are the same, so the original image data has the same color channel groups as the image data to be compressed. Extracting the pixel value data of each color channel can therefore also be understood as separating the data of the different channels in each group and arranging the data of each channel in pixel scanning order.
The conventional compression method reads and compresses the pixel value data of each color channel in turn, following the pixel scanning order, reading the next channel only after the current channel has been compressed. For example, when a BMP-format image is compressed, the R-channel value of the currently scanned pixel is read and compressed, then the G-channel value, then the B-channel value. The three channel reads each occupy system bandwidth and interfere with one another, causing a large delay, which grows further when other programs or the CPU access the DDR at the same time. The conventional method therefore has low compression efficiency and high bandwidth occupancy. There are also schemes that compress the three color channels with one compression engine per channel, but this raises cost without reducing bandwidth occupancy. To solve these problems, the present application proposes compression by the cross-compression method.
An exemplary flow of cross-compression includes:
1) acquire, in pixel scanning order and into the same data channel, the pixel value data (bit stream) of each pixel of the original image data obtained by applying the reverse gain to the image to be compressed. Each pixel may include two color channels (e.g., channels A and B), three (e.g., A, B and C), or more; the pixel values of consecutive pixels are cyclically interleaved by channel (e.g., ABABAB or ABCABC);
2) extract the acquired pixel value data of each pixel into its color channel, arranging the pixel values of the same color channel together in pixel scanning order (e.g., AAA or BBB), thereby obtaining the pixel value data of each color channel;
3) compress the pixel value data of each color channel separately with a typical compression algorithm, obtaining the compressed pixel value data of each channel (e.g., A'A'A' or B'B'B');
4) fuse the compressed pixel value data according to the arrangement of the color channels in the original image data, restoring each pixel value to its original position, so that the channel arrangement of the pixel data in the fused compressed image data is the same as that of the original image data acquired in step 1) (e.g., A'B'A'B'A'B').
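The four steps above can be sketched as follows (an illustrative model; `compress_channel` is a stand-in for the difference/quantization/entropy stages, not the actual algorithm):

```python
# Cross compression sketch: de-interleave by channel, compress each
# channel independently, then re-interleave in the original order.

def compress_channel(values):
    return [v // 2 for v in values]       # placeholder per-channel compressor

def cross_compress(stream, n_channels):
    # 2) split the interleaved stream into per-channel sequences
    channels = [stream[i::n_channels] for i in range(n_channels)]
    # 3) compress each color channel independently
    compressed = [compress_channel(ch) for ch in channels]
    # 4) fuse back in the original interleaved arrangement
    fused = []
    for i in range(len(stream) // n_channels):
        for ch in compressed:
            fused.append(ch[i])
    return fused

# ABABAB input: channel A = (10, 30, 50), channel B = (20, 40, 60)
assert cross_compress([10, 20, 30, 40, 50, 60], 2) == [5, 10, 15, 20, 25, 30]
```

Because the channel split and the fusion use the same stride, the output preserves the original per-pixel channel ordering, which is the defining property of cross compression.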
The process of cross-compression will be explained later and will not be described in detail here.
After the compression of the image data is completed according to the above-described cross-compression method, the process of the flow 504 is continued for the obtained compressed image data.
504: and writing the bit stream of the compressed image data into the DDR for temporary storage.
Compared with the bit stream obtained by directly compressing the image data without applying a reverse gain, the bit stream of the compressed image data obtained after applying the reverse gain is much smaller; that is, the reverse gain greatly reduces the image compression ratio, compressing the image data to be compressed into compressed image data with a smaller bit stream. This saves considerable data-channel bandwidth when writing to the DDR and improves transmission and write efficiency, while the cross-compression method greatly improves the compression efficiency of the image.
505: when the image data temporarily stored in the DDR needs to be used, access the DDR and read the compressed image data. Because the bit stream of the compressed image data is small, accessing the DDR occupies little bandwidth, saving system bandwidth.
506: decompress the read compressed image data by cross decompression to obtain the decompressed original image data. Cross decompression extracts the pixel value data of each color channel from the compressed image data, decompresses each channel's data in turn by entropy decoding, inverse quantization and difference restoration, and fuses the decompressed per-channel data according to the channel arrangement of the compressed image data, yielding a decompressed bit stream identical to the original image data bit stream.
It is understood that cross decompression is the inverse of cross compression: entropy decoding is the inverse of entropy encoding, inverse quantization is the inverse of quantization, and difference restoration is the inverse of the differential preprocessing.
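As an illustration, the inverse relationship can be sketched as follows (`decompress_channel` stands in for entropy decoding, inverse quantization and difference restoration; the names and the multiply-by-2 stage are invented for this example):

```python
# Cross decompression sketch: split the compressed stream per channel,
# run the inverse per-channel stages, and fuse in the original order.

def decompress_channel(values):
    return [v * 2 for v in values]        # placeholder inverse stage

def cross_decompress(stream, n_channels):
    channels = [stream[i::n_channels] for i in range(n_channels)]
    restored = [decompress_channel(ch) for ch in channels]
    fused = []
    for i in range(len(stream) // n_channels):
        for ch in restored:
            fused.append(ch[i])
    return fused

assert cross_decompress([5, 10, 15, 20], 2) == [10, 20, 30, 40]
```

The structure mirrors the compression sketch exactly; only the per-channel stage is inverted, matching the statement that the two flows are basically the same.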
The process of cross decompression will be explained later and is not described in detail here.
507: reapply the gains to the decompressed original image data, i.e., multiply the data by the gain factors removed by the reverse gain in flow 502, output the re-gained image data as the decompressed image data, and complete the image decompression. The re-gained image data is identical to the image data to be compressed obtained in flow 501.
For example, if the product of all gain factors applied to the image data to be compressed obtained in flow 501 is k1*k2*k3, and during the reverse gain the data was divided in turn by k1, k2 and k3 to obtain the original image data, then when the gain is reapplied the decompressed original image data must be multiplied by the product k1*k2*k3 of all the gain factors, and the resulting re-gained image data is the same as the image data to be compressed obtained in flow 501.
Next, the process of cross-compression in step 503 above will be explained with reference to fig. 6.
Specifically, as shown in fig. 6(a), if the acquired image to be compressed is a Bayer-format image, the color channels of its pixels can be expressed in the form ABABAB, where A and B denote color channels. Each pixel of a Bayer-format image has only one color channel, and the channels of adjacent pixels may differ, so A and B here denote the channels of two adjacent pixels, which may be the same or different. Taking six consecutive pixels of the image to be compressed as an example, compression after applying the reverse gain proceeds as follows:
1) acquire the pixel data bit stream of the reverse-gained Bayer-format image into the same data channel; the channel arrangement of the acquired pixel data may be B3A3B2A2B1A1 (where B3, B2 and B1 denote the B-channel pixel values of the six consecutive pixels, and A3, A2 and A1 the A-channel pixel values);
2) extract the pixel value data of each color channel from the pixel data of the Bayer-format image, obtaining two channel bit streams, B3B2B1 and A3A2A1;
3) compress the two channel bit streams separately to obtain two compressed channel bit streams, B3'B2'B1' and A3'A2'A1' (where B3', B2', B1' denote the compressed B-channel pixel values, and A3', A2', A1' the compressed A-channel pixel values);
4) fuse the two compressed channel bit streams into one bit stream according to the channel arrangement of the Bayer-format pixel data in 1), obtaining the bit stream of the Bayer-format compressed image data. Its channel arrangement, B3'A3'B2'A2'B1'A1', is the same as the arrangement B3A3B2A2B1A1 of the pixel data acquired in 1).
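The Bayer example above can be traced with symbolic labels, which makes the split and fusion order easy to verify (pure illustration; a prime mark stands for "compressed"):

```python
# Symbolic trace of the Bayer cross-compression example.
stream = ["B3", "A3", "B2", "A2", "B1", "A1"]

# 2) extract per-channel bit streams
b_chan = stream[0::2]                 # ['B3', 'B2', 'B1']
a_chan = stream[1::2]                 # ['A3', 'A2', 'A1']

# 3) per-channel compression, marked here by appending a prime
b_comp = [s + "'" for s in b_chan]
a_comp = [s + "'" for s in a_chan]

# 4) fuse back in the original BABA... arrangement
fused = [v for pair in zip(b_comp, a_comp) for v in pair]
assert fused == ["B3'", "A3'", "B2'", "A2'", "B1'", "A1'"]
```

The same trace applies to the BMP and YUV cases with a stride of three channels instead of two.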
As shown in fig. 6(b), if the acquired image to be compressed is a BMP-format image, the color channels of its pixels can be expressed in the form ABCABC, where A, B and C denote color channels. Each pixel of a BMP-format image has three color channels (R, G, B), so A, B and C here denote the R, G and B channels respectively. Taking two consecutive pixels of the image to be compressed as an example, compression after applying the reverse gain proceeds as follows:
1) acquire the pixel data bit stream of the reverse-gained BMP-format image into the same data channel; the channel arrangement of the acquired pixel data may be C2B2A2C1B1A1 (where C2, C1 denote the C-channel pixel values of the two consecutive pixels, B2, B1 the B-channel pixel values, and A2, A1 the A-channel pixel values);
2) extract the pixel value data of each color channel from the pixel data of the BMP-format image, obtaining three channel bit streams, C2C1, B2B1 and A2A1;
3) compress the three channel bit streams separately to obtain three compressed channel bit streams, C2'C1', B2'B1' and A2'A1' (where C2', C1' denote the compressed C-channel pixel values, B2', B1' the compressed B-channel values, and A2', A1' the compressed A-channel values);
4) fuse the three compressed channel bit streams into one bit stream according to the channel arrangement of the BMP-format pixel data in 1), obtaining the bit stream of the BMP-format compressed image data. Its channel arrangement, C2'B2'A2'C1'B1'A1', is the same as the arrangement C2B2A2C1B1A1 of the pixel data acquired in 1).
As shown in fig. 6(c), if the acquired original image is a YUV-format image, the pixel value data of its pixels can be expressed in the form ABACABAC, where A, B and C denote color channels. Each pixel of a YUV-format image has two color channels, (Y, U) or (Y, V), so A, B and C here denote the Y, U and V channels respectively. Taking four consecutive pixels of the image to be compressed as an example, compression after applying the reverse gain proceeds as follows:
1) acquire the pixel data bit stream of the reverse-gained YUV-format image into the same data channel; the channel arrangement of the acquired pixel data may be C2A4B2A3C1A2B1A1 (where C2, C1 denote the C-channel pixel values of the four consecutive pixels, B2, B1 the B-channel pixel values, and A4, A3, A2, A1 the A-channel pixel values);
2) extract the pixel value data of each color channel from the pixel data of the YUV-format image, obtaining three channel bit streams, C2C1, B2B1 and A4A3A2A1;
3) compress the three channel bit streams separately to obtain three compressed channel bit streams, C2'C1', B2'B1' and A4'A3'A2'A1' (where C2', C1' denote the compressed C-channel pixel values, B2', B1' the compressed B-channel values, and A4', A3', A2', A1' the compressed A-channel values);
4) fuse the three compressed channel bit streams into one bit stream according to the channel arrangement of the YUV-format pixel data in 1), obtaining the bit stream of the YUV-format compressed image data. Its channel arrangement, C2'A4'B2'A3'C1'A2'B1'A1', is the same as the arrangement C2A4B2A3C1A2B1A1 of the pixel data acquired in 1).
In some embodiments, when other image formats are compressed by the cross-compression method, the procedure can be inferred from the pixel value arrangement rule of the original image's pixels, and is not limited here.
The typical compression algorithm applied in the cross-compression process is described with reference to fig. 2 and the related description, and will not be described herein again.
The cross-decompression process used in step 506 is explained below with reference to fig. 7.
As shown in fig. 7, the cross-decompression process of fig. 7(a) is the inverse of the cross-compression process of fig. 6(a); likewise, fig. 7(b) inverts fig. 6(b), and fig. 7(c) inverts fig. 6(c). Cross decompression and cross compression proceed in essentially the same way, except that the bit stream acquired into the same data channel during cross decompression is the compressed image data bit stream, and the bit stream obtained after per-channel decompression and fusion is the original image data bit stream.
Fig. 8 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. The electronic device 100 may include a processor 110, a power module 140, a memory 180, a mobile communication module 130, a wireless communication module 120, a sensor module 190, an audio module 150, a camera 170, an interface module 160, buttons 201, a display 202, and the like.
It is to be understood that the illustrated structure of the embodiment of the present invention does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, for example processing modules or circuits such as a CPU, an ISP, a graphics processing unit (GPU), a DSP, a microcontroller unit (MCU), an artificial intelligence (AI) processor, or a field-programmable gate array (FPGA). The different processing units may be separate devices or integrated into one or more processors. A memory unit may be provided in the processor 110 for storing instructions and data. In some embodiments, the memory units in the processor 110 include a cache memory 180 and a DDR. The ISP applies the image compression-decompression method of the present application to reduce the data-channel bandwidth occupied when writing image data to a storage unit (such as the DDR) or reading temporarily stored image data from it; because the compression method of the present application greatly reduces the image compression ratio, the transmission efficiency of the compressed image is greatly improved.
The power module 140 may include a power supply, power management components, and the like. The power source may be a battery. The power management component is used for managing the charging of the power supply and the power supply of the power supply to other modules. In some embodiments, the power management component includes a charge management module and a power management module. The charging management module is used for receiving charging input from the charger; the power management module is used for connecting a power supply, the charging management module and the processor 110. The power management module receives power and/or charge management module input and provides power to the processor 110, the display 202, the camera 170, and the wireless communication module 120.
The mobile communication module 130 may include, but is not limited to, an antenna, a power amplifier, a filter, an LNA (Low noise amplifier), and the like.
The wireless communication module 120 may include an antenna, and implement transceiving of electromagnetic waves via the antenna. The wireless communication module 120 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The electronic device 100 may communicate with networks and other devices via wireless communication techniques.
In some embodiments, the mobile communication module 130 and the wireless communication module 120 of the electronic device 100 may also be located in the same module.
The display screen 202 is used for displaying the human-computer interaction interface, images, video, and so on. The display screen 202 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
The sensor module 190 may include a proximity light sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
The audio module 150 is used to convert digital audio information into an analog audio signal output or convert an analog audio input into a digital audio signal. The audio module 150 may also be used to encode and decode audio signals. In some embodiments, the audio module 150 may be disposed in the processor 110, or some functional modules of the audio module 150 may be disposed in the processor 110. In some embodiments, audio module 150 may include speakers, an earpiece, a microphone, and a headphone interface.
The camera 170 is used to capture still images or video. The object generates an optical image through the lens 101 and projects the optical image onto the photosensitive element. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. The electronic device 100 may implement a shooting function through the ISP103, the camera 170, a video codec, a Graphics Processing Unit (GPU), the display 202, and an application processor.
The interface module 160 includes an external memory interface, a Universal Serial Bus (USB) interface, a Subscriber Identity Module (SIM) card interface, and the like. The external memory interface may be used to connect an external memory card, such as a Micro SD card, to extend the storage capability of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface to implement a data storage function. The universal serial bus interface is used for communication between the electronic device 100 and other electronic devices. The SIM card interface is used for communicating with a SIM card mounted to the electronic device 100, for example reading a telephone number stored in the SIM card, or writing a telephone number to the SIM card.
In some embodiments, the electronic device 100 also includes the keys 201, a motor, indicators, and the like. The keys 201 may include a volume key, an on/off key, and so on.
Fig. 9 is a system block diagram of a system on chip (SOC) 900 according to some embodiments of the present application. In fig. 9, like parts have the same reference numerals, and the dashed boxes are optional features of more advanced SOCs. In fig. 9, the SOC 900 includes: an interconnect unit 950 coupled to an application processor 915; a system agent unit 970; a bus controller unit 980; an integrated memory controller unit 940; a set of one or more coprocessors 920, which may include integrated graphics logic, an ISP, an audio processor, and a video processor; an SRAM unit 930; and a direct memory access (DMA) unit 960. In one embodiment, the coprocessor 920 includes a special-purpose processor such as a network or communication processor, a compression engine, a GPU, a high-throughput MIC processor, or an embedded processor. Since bandwidth is always the technical bottleneck of an SOC, applying the image compression-decompression method of the present application in the SOC helps save its system bandwidth and improves image compression and transmission efficiency.
Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of these implementations. Embodiments of the application may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices in a known manner. For purposes of this application, a processing system includes any system having a processor such as, for example, a Digital Signal Processor (DSP), a microcontroller, an ISP, an application-specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code can also be implemented in assembly or machine language, if desired. Indeed, the mechanisms described in this application are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed via a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or a tangible machine-readable memory used to transmit information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some features of the structures or methods may be shown in a particular arrangement and/or order. However, it is to be understood that such specific arrangement and/or ordering may not be required. Rather, in some embodiments, the features may be arranged in a manner and/or order different from that shown in the illustrative figures. In addition, the inclusion of a structural or methodical feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments, may not be included or may be combined with other features.
It should be noted that, in the apparatus embodiments of the present application, each unit/module is a logical unit/module. Physically, one logical unit/module may be one physical unit/module, may be a part of one physical unit/module, or may be implemented by a combination of multiple physical units/modules; the physical implementation of the logical units/modules themselves is not what matters most, and it is the combination of functions implemented by these logical units/modules that is key to solving the technical problem addressed by the present application. Furthermore, in order to highlight the innovative part of the present application, the above apparatus embodiments do not introduce units/modules that are less closely related to solving the technical problem presented in this application; this does not mean that no other units/modules exist in the above apparatus embodiments.
It is noted that, in the examples and descriptions of this patent, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
While the present application has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present application.

Claims (14)

1. An image compression method for an electronic device having a photographing function, the method comprising:
acquiring image data to be compressed and at least one gain factor of a gain applied to the image to be compressed;
obtaining first intermediate data based on the image data to be compressed and the gain factor;
and compressing the first intermediate data to obtain compressed image data.
2. The method according to claim 1, wherein the image data to be compressed comprises a plurality of color channel groups arranged in a cycle; the number and arrangement of the color channels are the same in each color channel group; each color channel group includes at least two color channels arranged in an interleaved manner; and the method further comprises:
extracting each color channel's data from the color channel groups of the first intermediate data, which comprises the plurality of cyclically arranged color channel groups, and arranging the extracted data to obtain second intermediate data;
compressing the second intermediate data to obtain third intermediate data;
and fusing the third intermediate data according to the arrangement of the color channels in the color channel groups to obtain the compressed image data.
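The channel-separation and fusion steps recited in claim 2 can be illustrated with a minimal sketch. This is not part of the claimed method itself: it assumes a Bayer RGGB mosaic as one common example of color channel groups arranged in a cycle (each 2×2 block being one group), and all function names are illustrative.

```python
import numpy as np

def split_channels(raw):
    """Split a Bayer RGGB mosaic into four per-channel planes.

    Each 2x2 block is one "color channel group"; strided slicing
    extracts every channel's pixel values in scan order.
    """
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

def fuse_channels(planes):
    """Inverse of split_channels: re-interleave the planes into a
    mosaic according to the arrangement of channels in each group."""
    h, w = planes["R"].shape
    raw = np.empty((2 * h, 2 * w), dtype=planes["R"].dtype)
    raw[0::2, 0::2] = planes["R"]
    raw[0::2, 1::2] = planes["G1"]
    raw[1::2, 0::2] = planes["G2"]
    raw[1::2, 1::2] = planes["B"]
    return raw
```

Splitting and fusing are exact inverses, which is what allows the per-channel compressed streams to be fused back into a single image as in the last step of claim 2.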
3. The method according to claim 2, wherein, in the case where one gain is applied to the image to be compressed, the image data to be compressed is divided by the gain factor of that gain to obtain the first intermediate data;
and in the case where two or more gains are applied to the image to be compressed, the image data to be compressed is divided by the product of the gain factors of those gains to obtain the first intermediate data.
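A minimal sketch of the gain step of claims 1 and 3 (and its inverse, multiplying the gains back in at decompression), under the assumption that the gain factors are known scalars such as analog and digital sensor gains; the function names are illustrative:

```python
import numpy as np

def remove_gain(image, gain_factors):
    """Divide out all applied gains: the divisor is the product
    of the gain factors when more than one gain was applied."""
    total_gain = float(np.prod(gain_factors))
    return image.astype(np.float32) / total_gain

def restore_gain(image, gain_factors):
    """Inverse step used at decompression: multiply the gain
    product back in to recover the original value range."""
    total_gain = float(np.prod(gain_factors))
    return image * total_gain
```

Dividing out the gains before compression shrinks the numeric range of the data, which tends to make the subsequent quantization and entropy-coding stages more effective.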
4. The method of claim 3, wherein the color channel data comprises the pixel values on the color channels of the pixel point data of the image to be compressed, and extracting each color channel's data from the color channel groups for arrangement comprises:
sequentially extracting the pixel values on each color channel according to the scanning order of the pixel points of the image to be compressed, and arranging the pixel values on each color channel in that scanning order to obtain the second intermediate data.
5. The method of claim 4, wherein the second intermediate data comprises pixel values on each of the color channels, and wherein compressing the second intermediate data comprises:
and compressing the pixel values on each color channel through differential preprocessing, quantization and entropy coding in sequence to obtain the third intermediate data.
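The per-channel compression pipeline of claim 5 can be sketched as follows. The claim does not specify the entropy coder, so zlib's DEFLATE (Huffman coding plus LZ77) is used here purely as a stand-in; the quantization step size `q_step` is likewise an illustrative parameter (with `q_step = 1` the pipeline is lossless):

```python
import zlib
import numpy as np

def compress_plane(plane, q_step=1):
    """Claim 5's per-channel pipeline: differential preprocessing,
    then quantization, then entropy coding (zlib as a stand-in)."""
    flat = plane.astype(np.int32).ravel()
    diffs = np.diff(flat, prepend=0)        # differential preprocessing
    quantized = diffs // q_step             # quantization (lossy if q_step > 1)
    return zlib.compress(quantized.astype(np.int32).tobytes())

def decompress_plane(blob, shape, q_step=1):
    """The matching inverse pipeline: entropy decoding, inverse
    quantization, inverse differential preprocessing."""
    quantized = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    diffs = quantized * q_step              # inverse quantization
    flat = np.cumsum(diffs)                 # inverse differential step
    return flat.reshape(shape)
```

The differencing step exploits the fact that neighboring pixels of the same color channel are usually similar, so the residuals cluster near zero and entropy-code well.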
6. The method of any of claims 1 to 5, wherein the compressed image data comprises the cyclically arranged plurality of color channel groups.
7. A method of image decompression, the method comprising:
acquiring the compressed image data and the gain factor obtained by the image compression method according to claim 1;
decompressing the compressed image data to obtain the first intermediate data;
obtaining decompressed image data based on the first intermediate data and the gain factor.
8. The method of claim 7, wherein the compressed image data comprises a plurality of color channel groups arranged in a cycle; the number and arrangement of the color channels are the same in each color channel group; each color channel group includes at least two color channels arranged in an interleaved manner; and the method further comprises:
extracting each color channel data in the color channel group for arrangement to obtain third intermediate data;
decompressing the third intermediate data to obtain second intermediate data;
fusing the second intermediate data according to the arrangement of the color channels in the color channel groups to obtain the first intermediate data;
and multiplying the first intermediate data by the gain factor to obtain the decompressed image data.
9. The method according to claim 8, wherein, in the case where one gain is applied to the image to be compressed, the first intermediate data is multiplied by the gain factor of that gain to obtain the decompressed image data;
and in the case where two or more gains are applied to the image to be compressed, the first intermediate data is multiplied by the product of the gain factors of those gains to obtain the decompressed image data.
10. The method of claim 9, wherein the color channel data comprises the pixel values on each color channel of each pixel point's data in the compressed image data, and extracting each color channel's data from the color channel groups for arrangement comprises:
sequentially extracting the pixel values on each color channel according to the scanning order of the pixel points of the image to be compressed, and arranging the pixel values on each color channel in that scanning order to obtain the third intermediate data.
11. The method of claim 10, wherein the third intermediate data comprises the pixel values on each of the color channels, and decompressing the third intermediate data comprises: sequentially performing entropy decoding, inverse quantization, and inverse differential preprocessing on the pixel values on each color channel to obtain the second intermediate data, wherein
the entropy decoding is the inverse of the entropy encoding, the inverse quantization is the inverse of the quantization, and the inverse differential preprocessing is the inverse of the differential preprocessing.
12. The method according to any one of claims 7 to 11, wherein the decompressed image data is the same as the image data to be compressed.
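As a compact, self-contained illustration of why claim 12 can hold, the sketch below runs the whole pipeline end to end: with the gain product divided out exactly and a quantization step of 1, every stage is invertible and the decompressed data equals the input. All names, and the choice of zlib as entropy coder, are illustrative rather than taken from the application.

```python
import zlib
import numpy as np

def compress(image, gains, q_step=1):
    """End to end: divide out the gain product, then difference,
    quantize, and entropy-code (zlib as a stand-in)."""
    data = image.astype(np.int64) // int(np.prod(gains))
    diffs = np.diff(data.ravel(), prepend=0)   # differential preprocessing
    return zlib.compress((diffs // q_step).tobytes()), image.shape

def decompress(blob, shape, gains, q_step=1):
    """Inverse end to end: decode, invert the quantization and
    differencing, then multiply the gain product back in."""
    q = np.frombuffer(zlib.decompress(blob), dtype=np.int64)
    flat = np.cumsum(q * q_step)               # inverse quantization + differencing
    return flat.reshape(shape) * int(np.prod(gains))
```

When the pixel values are exact multiples of the gain product and `q_step` is 1, the round trip is bit-exact, matching claim 12; otherwise the integer division of the gain step introduces a bounded rounding error.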
13. A readable medium having stored thereon instructions which, when executed on an electronic device, cause the electronic device to perform the image compression method of any one of claims 1 to 5 or cause the electronic device to perform the image decompression method of any one of claims 7 to 11.
14. An electronic device, comprising:
a memory for storing instructions for execution by one or more processors of the electronic device, an
A processor, being one of processors of an electronic device, for performing the image compression method of any one of claims 1 to 5 and the image decompression method of any one of claims 7 to 11.
CN202011015710.3A 2020-09-24 2020-09-24 Image compression and decompression method, readable medium and electronic device thereof Pending CN112135150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011015710.3A CN112135150A (en) 2020-09-24 2020-09-24 Image compression and decompression method, readable medium and electronic device thereof

Publications (1)

Publication Number Publication Date
CN112135150A true CN112135150A (en) 2020-12-25

Family

ID=73840950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011015710.3A Pending CN112135150A (en) 2020-09-24 2020-09-24 Image compression and decompression method, readable medium and electronic device thereof

Country Status (1)

Country Link
CN (1) CN112135150A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6898310B1 (en) * 1998-07-03 2005-05-24 Tadahiro Ohmi Image signal processing method, image signal processing system, storage medium, and image sensing apparatus
WO2019142821A1 (en) * 2018-01-16 2019-07-25 株式会社ニコン Encoder, decoder, encoding method, decoding method, encoding program, and decoding program

Similar Documents

Publication Publication Date Title
US9686493B2 (en) Image capture accelerator
JP5337707B2 (en) Method and system for image preprocessing
US8619866B2 (en) Reducing memory bandwidth for processing digital image data
US11062432B2 (en) Method and device for reconstructing an HDR image
US10784892B1 (en) High throughput hardware unit providing efficient lossless data compression in convolution neural networks
US11741585B2 (en) Method and device for obtaining a second image from a first image when the dynamic range of the luminance of the first image is greater than the dynamic range of the luminance of the second image
EP4254964A1 (en) Image processing method and apparatus, device, and storage medium
KR20200002029A (en) Method and device for color gamut mapping
CN111741302A (en) Data processing method and device, computer readable medium and electronic equipment
CN111696039B (en) Image processing method and device, storage medium and electronic equipment
CN111738951A (en) Image processing method and device
US20110116725A1 (en) Data compression method and data compression system
JP4561649B2 (en) Image compression apparatus, image compression program and image compression method, HDR image generation apparatus, HDR image generation program and HDR image generation method, image processing system, image processing program and image processing method
US20160337650A1 (en) Color space compression
US11303805B2 (en) Electronic device for compressing image by using compression attribute generated in image acquisition procedure using image sensor, and operating method thereof
EP3557872A1 (en) Method and device for encoding an image or video with optimized compression efficiency preserving image or video fidelity
JP4302661B2 (en) Image processing system
CN112135150A (en) Image compression and decompression method, readable medium and electronic device thereof
CN113364964B (en) Image processing method, image processing apparatus, storage medium, and terminal device
WO2023246655A1 (en) Image encoding method and apparatus, and image decoding method and apparatus
WO2023005336A1 (en) Image processing method and apparatus, and storage medium and electronic device
KR20140071867A (en) Apparatus, method and program of image processing
WO2024078403A1 (en) Image processing method and apparatus, and device
KR100834357B1 (en) Device and method for compressing image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201225
