CN115695821A - Image compression method and device, image decompression method and device, and storage medium - Google Patents

Image compression method and device, image decompression method and device, and storage medium

Info

Publication number: CN115695821A
Application number: CN202110859790.9A
Authority: CN (China)
Inventor: 王友学
Applicant: BOE Technology Group Co Ltd; Beijing BOE Technology Development Co Ltd
Legal status: Pending
Landscapes: Image Analysis (AREA)
Abstract

The embodiment of the disclosure provides an image compression method and device, an image decompression method and device and a storage medium. The image compression method comprises the following steps: obtaining an original image to be processed; wherein the original image comprises: a target image area and other image areas than the target image area; acquiring feature data of the target image area based on the original image, and acquiring compressed data of the other image areas; wherein the feature data of the target image region includes: shape information of the target image area, position information of the target image area, and gray scale information of the target image area; and generating and outputting compressed data corresponding to the original image based on the feature data of the target image area and the compressed data of the other image area.

Description

Image compression method and apparatus, image decompression method and apparatus, and storage medium
Technical Field
The embodiments of the present disclosure relate to, but not limited to, the field of image processing technologies, and in particular, to an image compression method and apparatus, an image decompression method and apparatus, and a storage medium.
Background
At present, in scenes such as medicine and industrial inspection, the amount of image data to be processed keeps increasing in respects such as real-time transmission, data processing, and data storage. For example, when multi-frame image data in a video is transmitted in real time, the transmission space and storage space occupied by the multi-frame image data are large due to factors such as the image compression mode, so the real-time transmission effect of the data is poor.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
In a first aspect, an embodiment of the present disclosure provides an image compression method, including: obtaining an original image to be processed; wherein the original image comprises: a target image area and other image areas than the target image area; acquiring feature data of the target image area based on the original image, and acquiring compressed data of the other image areas; wherein the feature data of the target image area comprises: shape information of the target image area, position information of the target image area, and gray scale information of the target image area; and generating and outputting compressed data corresponding to the original image based on the feature data of the target image area and the compressed data of the other image areas.
In a second aspect, an embodiment of the present disclosure provides an image compression apparatus, including: a processor and a memory storing a computer program operable on the processor, wherein the processor implements the steps of the image compression method described in the above embodiments when executing the program.
In a third aspect, the present disclosure provides a computer-readable storage medium, which includes a stored program, where when the program runs, a device in which the storage medium is located is controlled to execute the steps of the image compression method in the foregoing embodiments.
In a fourth aspect, an embodiment of the present disclosure provides an image decompression method, including: receiving compressed data corresponding to an original image; wherein the compressed data corresponding to the original image comprises: feature data of a target image area in the original image and compressed data of other image areas in the original image except the target image area, and the feature data of the target image area comprises: shape information of the target image area, position information of the target image area, and gray scale information of the target image area; obtaining a first image containing the other image areas based on the compressed data of the other image areas; obtaining a second image containing the target image area based on the feature data of the target image area; and combining the first image and the second image to obtain the original image.
In a fifth aspect, an embodiment of the present disclosure provides an image decompression apparatus, including: a processor and a memory storing a computer program operable on the processor, wherein the processor implements the steps of the image decompression method described in the above embodiments when executing the program.
In a sixth aspect, the present disclosure provides a computer-readable storage medium, which includes a stored program, where when the program runs, an apparatus where the storage medium is located is controlled to execute the steps of the image decompression method described in the foregoing embodiments.
When an original image is compressed, the target image area in the original image is represented by feature data such as shape information, position information, and gray scale information, so that image compression is achieved by describing regions, while the other image areas except the target image area are represented by compressed data generated in a pixel-by-pixel image compression mode. Therefore, by reducing the information space occupied by the target image area, a higher compression ratio can be achieved, storage space and transmission space can be saved, and real-time transmission can be realized.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the disclosure. Other advantages of the disclosure may be realized and attained by the instrumentalities and combinations particularly pointed out in the specification and the drawings.
Other aspects will be apparent upon reading and understanding the attached drawings and detailed description.
Drawings
The accompanying drawings are included to provide an understanding of the disclosed embodiments and are incorporated in and constitute a part of this specification; they illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure, not to limit it. The shapes and sizes of the various elements in the drawings do not reflect true proportions and are merely intended to illustrate the present disclosure.
FIG. 1A is a schematic diagram of image processing in an image compression mode;
FIG. 1B is a diagram illustrating an image compression method;
FIG. 2 is a flow chart diagram of an image compression method in an exemplary embodiment of the present disclosure;
FIG. 3A is a schematic illustration of an original image in an exemplary embodiment of the present disclosure;
FIG. 3B is a diagram of a binary image corresponding to an original image in an exemplary embodiment of the disclosure;
FIG. 4 is a schematic flow chart diagram of an image decompression method in an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram of obtaining a curve representing a shape of a target image region in an exemplary embodiment of the present disclosure;
fig. 6 is a schematic diagram of an application scenario of an image compression method and an image decompression method in an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an image compression apparatus according to an exemplary embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an image decompression apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. Note that the embodiments may be implemented in a plurality of different forms. Those skilled in the art can readily appreciate the fact that the manner and content may be varied into a variety of forms without departing from the spirit and scope of the present disclosure. Therefore, the present disclosure should not be construed as being limited to the contents described in the following embodiments. The embodiments and features of the embodiments in the present disclosure may be arbitrarily combined with each other without conflict.
In the drawings of the present disclosure, the size of each component, the thickness of a layer, or a region may be exaggerated for clarity. Therefore, one aspect of the present disclosure is not necessarily limited to the dimensions, and the shapes and sizes of the respective components in the drawings do not reflect a true scale. Further, the drawings schematically show ideal examples, and one embodiment of the present disclosure is not limited to the shapes, numerical values, and the like shown in the drawings.
In the exemplary embodiments of the present disclosure, ordinal numbers such as "first", "second", "third", and the like are provided to avoid confusion of constituent elements, and are not limited in number.
In the exemplary embodiments of the present disclosure, words indicating orientations or positional relationships, such as "middle", "upper", "lower", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner", and "outer", are used with reference to the drawings only for convenience in describing the positional relationships of constituent elements and simplifying the description, and do not indicate or imply that the referred device or element must have a specific orientation or be configured and operated in a specific orientation; they should therefore not be construed as limiting the present disclosure. The positional relationships of the components change as appropriate according to the direction in which each component is described, so these words are not limited to those used in the specification and may be replaced as appropriate depending on the case.
In the exemplary embodiments of the present disclosure, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be construed broadly. For example, a connection may be a fixed connection, a removable connection, or an integral connection; it may be a mechanical connection or an electrical connection; and it may be a direct connection, an indirect connection through intervening components, or an internal communication between two components. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present disclosure according to the specific situation.
"about" in the exemplary embodiments of the present disclosure refers to a numerical value that is not strictly limited, allowing for process and measurement error.
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
In the field of image processing technology, as shown in FIG. 1A, images are generally represented in pixel-by-pixel order, from left to right (i.e., along the second direction X) and from top to bottom (i.e., along the first direction Y). In image compression processing, pixels are likewise processed one by one in this order. Approximating the real image pixel by pixel has the advantages of a simple processing method, mature technology, good effect, and the like, and has become the mainstream approach in the image field. For example, in a conventional compressed image representation, each pixel may be composed of RGB or YUV components. For example, taking the YUV422 encoding method as the image compression method, as shown in FIG. 1B, each frame of image in a video is transmitted and stored such that the gray value of each pixel in the current frame occupies a storage space of 2 bytes. However, for a large-size 16-bit bitmap (for example, 3000 pixels by 3000 pixels), the compressed data produced by such pixel-by-pixel image compression methods still requires a large storage space and cannot meet the requirement of real-time transmission.
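For illustration, the following back-of-the-envelope sketch (in Python) estimates the per-frame size and raw bitrate; the frame size matches the example above, while the frame rate is an assumed value rather than one given in this disclosure:

    # Rough storage/bandwidth estimate for raw YUV422 frames (illustrative only)
    width, height = 3000, 3000          # example large-size bitmap from the text
    bytes_per_pixel = 2                 # YUV422 averages 2 bytes per pixel
    frame_bytes = width * height * bytes_per_pixel   # 18,000,000 bytes per frame
    fps = 30                            # assumed real-time frame rate
    print(f"{frame_bytes / 2**20:.1f} MiB per frame")   # ~17.2 MiB
    print(f"{frame_bytes * fps * 8 / 1e6:.0f} Mbit/s")  # ~4320 Mbit/s uncompressed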
The embodiment of the disclosure provides an image compression method which can be applied to an image compression device. In practical applications, images (e.g., X-ray detection images) in the fields of medicine and industrial inspection generally have the characteristics of single content in most image areas, simple shape and texture changes, and large required storage space, so that the image compression method can be applied to the fields of medicine and industrial inspection.
Fig. 2 is a schematic flowchart of an image compression method in an exemplary embodiment of the present disclosure, and as shown in fig. 2, the image compression method may include:
step 21: an original image to be processed is obtained.
Wherein the original image includes: a target image area and other image areas than the target image area.
Step 22: based on the original image, feature data of a target image area in the original image is acquired, and compressed data of other image areas in the original image except the target image area is acquired.
In one exemplary embodiment, the original image may be a color image, or may be a grayscale image. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the raw image may be a medical image, for example, an X-ray detection image. The embodiments of the present disclosure do not limit this.
In one exemplary embodiment, the original image may be a 16-bit bitmap. The embodiments of the present disclosure do not limit this.
The feature data of the target image area may include: shape information of the target image area, position information of the target image area, and gray scale information of the target image area.
In an exemplary embodiment, the target image area may refer to an image area in the original image whose shape and texture change simply (e.g., the shape can be described by a curve) and whose content is relatively uniform (e.g., the color is relatively uniform, i.e., the gray values vary little).
In one exemplary embodiment, the shape information of the target image area may include: curve parameters for representing the shape of the target image area, wherein the curve parameters include: curve equation information and curve posture information.
In an exemplary embodiment, a curve may be considered as the intersection of two curved surfaces. Then, the curve equation information may include: the surface equations of the two intersecting curved surfaces, and the curve posture information may include: posture information of the intersection curve between the two intersecting curved surfaces, for example, a posture angle between the intersection curve and a coordinate axis. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the number of target image areas may be one or more, for example, two, three, or four, etc. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the compressed data of the other image areas may refer to data obtained by describing the image data corresponding to the other image areas pixel by pixel using a conventional image data compression method. For example, conventional image data compression methods may include, but are not limited to: Huffman coding, run-length coding, arithmetic coding, discrete cosine transform coding, hybrid coding, and the like. Here, the embodiment of the present disclosure does not limit this.
Step 23: and generating and outputting compressed data corresponding to the original image based on the characteristic data of the target image area and the compressed data of the other image areas.
In this way, when an original image is compressed, the image compression method provided in the embodiment of the present disclosure describes the target image area in the original image through feature data such as shape information, position information, and gray scale information, so that image compression is achieved in a region-description manner, while the other image areas except the target image area are represented by compressed data generated in a pixel-by-pixel image compression manner. Therefore, by reducing the information space occupied by the target image area, a higher compression ratio can be achieved, storage space and transmission space can be saved, and real-time transmission can be realized.
In one exemplary embodiment, step 22 may comprise:
step 221: acquiring feature data of a target image area based on an original image;
step 222: extracting other image areas from the original image based on the position information of the target image area;
step 223: and carrying out image compression processing on other image areas to obtain compressed data of the other image areas.
In one exemplary embodiment, the original image may be divided into two parts by binarization processing of the image: the target image area and the other image areas except the target image area. Of course, other image processing methods may also be used to determine the target image area, which is not limited herein by the embodiment of the present disclosure.
Next, an example of dividing an original image into a target image area and other image areas other than the target image area by using binarization processing of the image will be described.
In an exemplary embodiment, step 221 may include:
step 2211: performing binarization processing on the original image based on the gray value of the pixels of the original image to obtain a binary image corresponding to the original image;
step 2212: performing image segmentation processing on the binary image, determining a region to be processed in the binary image corresponding to the target image region, and acquiring feature data of the region to be processed in the binary image;
the feature data of the region to be processed in the binary image may include: the image processing method includes the steps of obtaining shape information of a region to be processed in a binary image, position information of the region to be processed in the binary image, and gray scale information of the region to be processed in the binary image.
In an exemplary embodiment, a curve may be considered as the intersection of two curved surfaces. For example, the curve equation information may include: the surface equations of the two intersecting curved surfaces, and the curve posture information may include: posture information of the intersection curve between the two intersecting curved surfaces, for example, a posture angle between the intersection curve and a coordinate axis. Here, the embodiment of the present disclosure does not limit this.
Step 2213: and determining the characteristic data of the region to be processed as the characteristic data of the target image region.
Here, a binary image (also referred to as a binarized image) may refer to an image in which each pixel has only two possible values or gray states. For example, the binarization value of each pixel in the binary image may be one of a preset first value for representing black and a preset second value for representing white. For example, the binarization value may generally be 0 or 1, where 1 may represent black and 0 may represent white; correspondingly, the binarization processing of the image may set the gray value of each pixel in the image to 0 or 1. Alternatively, the binarization value may be 0 or 255, where 255 may represent black and 0 may represent white; correspondingly, the binarization processing of the image may set the gray value of each pixel in the image to 0 or 255. Of course, other numerical values, letters, or symbols can be used instead in some embodiments with the same meaning. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the number of binary images corresponding to the original image may be one or more, for example, two, three, four, five, or six, etc. Here, the embodiment of the present disclosure does not limit this.
For example, taking the original image shown in FIG. 3A and its corresponding binary image shown in FIG. 3B as an example, the original image may include: the target image area 31, and the binary image may include: the connected regions 32, 33, and 34. Since the total pixel number of connected region 32 is greater than the preset threshold, connected region 32 in the binary image may be determined as the region to be processed in the binary image corresponding to the target image area 31 in the original image. Then, step 2213 may include: acquiring the gray value interval corresponding to the original image as the gray scale information of the target image area 31 based on the gray values of the pixels of the original image; acquiring the curve parameters indicating the shape of the region to be processed (for example, connected region 32) as the shape information of the corresponding target image area 31; and acquiring the contour coordinates of the region to be processed (for example, connected region 32) as the position information of the corresponding target image area 31. In this way, the feature data of the target image region can be obtained.
In an exemplary embodiment, taking the case where the number of binary images corresponding to the original image is one as an example, step 2211 may include the following steps A1 to A2:
Step A1: obtaining a gray value interval corresponding to the original image based on the gray values of the pixels of the original image;
Step A2: performing binarization processing on the original image by using the mean of the maximum gray value and the minimum gray value of the gray value interval corresponding to the original image as the threshold, to obtain the binary image corresponding to the original image.
For example, each pixel in the original image is traversed, and the gray value of each pixel is compared with the mean of the maximum gray value and the minimum gray value of the gray value interval corresponding to the original image; according to the comparison result, the binarization value of a pixel whose gray value is greater than the mean is determined as a preset first value for representing black, and the binarization value of a pixel whose gray value is not greater than the mean is determined as a preset second value for representing white. In this way, the binary image corresponding to the original image can be obtained.
For example, taking an example that the binary value may be 0 or 1, that is, the preset first value may be 1, and the preset second value may be 0, the binarization processing method shown in the following formula (1) may be adopted to perform binarization processing on the original image, so as to obtain a binary image corresponding to the original image.
g(x, y) = \begin{cases} 1, & f(x, y) > avg \\ 0, & f(x, y) \le avg \end{cases} \quad (1)
In formula (1), f(x, y) represents the original gray value of the pixel (x, y) in the original image, g(x, y) represents the binarization value (i.e., the gray value after binarization processing) of the pixel (x, y) in the binary image, min represents the minimum gray value of the gray value interval corresponding to the original image, max represents the maximum gray value of that interval, and avg represents the mean of the maximum gray value and the minimum gray value of that interval, i.e.

avg = \frac{max + min}{2}
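For example, this binarization step (formula (1)) may be sketched in Python with NumPy as follows, assuming the original image has already been loaded as a two-dimensional array of gray values:

    import numpy as np

    def binarize(image: np.ndarray) -> np.ndarray:
        """Binarize per formula (1): 1 (black) above the mid-gray threshold avg, else 0 (white)."""
        lo, hi = int(image.min()), int(image.max())  # gray value interval [min, max]
        avg = (lo + hi) / 2                          # threshold: mean of the interval endpoints
        return (image > avg).astype(np.uint8)        # g(x, y) = 1 if f(x, y) > avg else 0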
In an exemplary embodiment, taking the case where the number of binary images corresponding to the original image is multiple as an example, step 2211 may include the following steps B1 to B3:
Step B1: obtaining a gray value interval corresponding to the original image based on the gray values of the pixels of the original image;
Step B2: dividing the gray value interval corresponding to the original image into multiple sub-gray value intervals;
Step B3: performing binarization processing on the original image based on each of the sub-gray value intervals respectively, to obtain multiple sub-binary images in one-to-one correspondence with the sub-gray value intervals.
Then, the obtained binary image corresponding to the original image may include: the sub-binary images in one-to-one correspondence with the sub-gray value intervals.
In an exemplary embodiment, the multiple sub-gray value intervals divided from the gray value interval corresponding to the original image may form a continuous sequence, that is, the minimum gray value of the next sub-gray value interval may be the element adjacent to the maximum gray value of the previous sub-gray value interval. For example, the multiple sub-gray value intervals include: a 1st sub-gray value interval, a 2nd sub-gray value interval, and a 3rd sub-gray value interval arranged in order from small to large, where the minimum gray value of the 2nd sub-gray value interval may be the element adjacent to the maximum gray value of the 1st sub-gray value interval, and the minimum gray value of the 3rd sub-gray value interval may be the element adjacent to the maximum gray value of the 2nd sub-gray value interval.
In an exemplary embodiment, the dividing of the plurality of sub-gray value intervals based on the gray value interval corresponding to the original image may be performed in a uniform dividing manner, or may be performed in a non-uniform dividing manner according to the characteristics of the image. The embodiments of the present disclosure do not limit this.
For example, taking the gray value interval corresponding to the original image as [0, 255], and dividing it into 5 sub-gray value intervals: for a uniform division, the level boundary sequence may be {0, 51, 102, 153, 205, 255}, and the 1st to 5th sub-gray value intervals may then be: [0, 51), [51, 102), [102, 153), [153, 205), and [205, 255], i.e., [0, 50], [51, 101], [102, 152], [153, 204], and [205, 255]. For a non-uniform division, the level boundary sequence may be {0, 51, 132, 153, 185, 255}, and the 1st to 5th sub-gray value intervals may then be: [0, 50], [51, 131], [132, 152], [153, 184], and [185, 255]. Here, the embodiment of the present disclosure does not limit this.
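For example, such a division may be sketched as follows, assuming a level boundary sequence is given (the boundaries reproduce the uniform example above):

    def split_intervals(boundaries):
        """Turn a level boundary sequence into inclusive sub-gray-value intervals."""
        # Every interval except the last ends just before the next boundary;
        # the last interval is closed at the final boundary.
        intervals = [(boundaries[i], boundaries[i + 1] - 1)
                     for i in range(len(boundaries) - 2)]
        intervals.append((boundaries[-2], boundaries[-1]))
        return intervals

    print(split_intervals([0, 51, 102, 153, 205, 255]))
    # [(0, 50), (51, 101), (102, 152), (153, 204), (205, 255)]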
In an exemplary embodiment, the binary image corresponding to the original image may include: sub-binary images in one-to-one correspondence with the sub-gray value intervals. In this case, step B3 may include: performing the following operations on the original image for each sub-gray value interval: traversing each pixel in the original image, and comparing the gray value of each pixel with the maximum gray value and the minimum gray value of the sub-gray value interval; and, according to the comparison result, determining the binarization value of a pixel whose gray value falls within the sub-gray value interval as a preset first value for representing black, and determining the binarization value of a pixel whose gray value falls outside the sub-gray value interval as a preset second value for representing white, to obtain the sub-binary image corresponding to that sub-gray value interval.
For example, taking the case where the binarization value may be 0 or 1 (that is, the preset first value may be 1 and the preset second value may be 0), and where the gray value interval corresponding to the original image is divided into N sub-gray value intervals, the binarization processing method shown in the following formula (2) may be adopted to binarize the original image based on the ith sub-gray value interval, so as to obtain the ith sub-binary image corresponding to the original image. In this way, N sub-binary images corresponding to the original image can be obtained.
g_i(x, y) = \begin{cases} 1, & min\_i \le f(x, y) \le max\_i \\ 0, & \text{otherwise} \end{cases} \quad (2)
In formula (2), f(x, y) represents the original gray value of the pixel (x, y) in the original image, g_i(x, y) represents the binarization value (i.e., the gray value after binarization processing) of the pixel (x, y) in the ith sub-binary image, min_i represents the minimum gray value of the ith sub-gray value interval, and max_i represents the maximum gray value of the ith sub-gray value interval; where i ranges from 1 to N, and N is a positive integer greater than 1.
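For example, formula (2) applied over all the sub-gray value intervals may be sketched as follows (the function name is illustrative):

    import numpy as np

    def layer_binarize(image: np.ndarray, intervals) -> list:
        """Produce one sub-binary image per sub-gray-value interval, per formula (2)."""
        return [((image >= lo) & (image <= hi)).astype(np.uint8)  # 1 inside [min_i, max_i]
                for lo, hi in intervals]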
In an exemplary embodiment, the binary image corresponding to the original image may include: sub-binary images in one-to-one correspondence with the sub-gray value intervals, and the region to be processed in the binary image corresponding to the original image may include: the region to be processed in each sub-binary image. In this case, step 2212 may include: performing the following operations for each sub-binary image:
Step C1: performing connected domain analysis on the sub-binary image to obtain multiple connected regions in the sub-binary image;
Step C2: calculating the total pixel number of each of the multiple connected regions in the sub-binary image;
Step C3: determining, among the multiple connected regions in the sub-binary image, the connected regions whose total pixel number is greater than a preset threshold as the regions to be processed in the sub-binary image corresponding to the target image region.
In this way, regions with a large area in the binary image (i.e., connected regions whose total pixel number is greater than the preset threshold) can be screened out and represented by feature data such as shape information, position information, and gray scale information, while regions with a small area (i.e., connected regions whose total pixel number is not greater than the preset threshold) are removed from this path and represented by conventional compressed data. Thus, the processing speed and accuracy of image compression can be improved.
Here, the Connected Component may refer to an image Region (Blob) composed of pixels having the same pixel value (for example, a binarized value of 1) and being adjacent in position in the binary image. For example, a plurality of Connected components in the binary image may be found and labeled by a Connected Component Analysis (Connected Component Labeling) algorithm. For example, the connected component analysis algorithm may include, but is not limited to, two-Pass (Two-Pass) algorithm or Seed-Filling (Seed-Filling) algorithm, among others. For example, the neighborhood of pixels employed in the connected component analysis algorithm may include, but is not limited to, 4-neighborhoods, 8-neighborhoods, and the like. Here, the method of extracting the connected component from the binary image is not limited in the embodiments of the present disclosure.
In an exemplary embodiment, step C2 may include: performing the following operations for each connected region of the sub-binary image: performing edge detection processing on the connected region to obtain the contour coordinates of the connected region; and calculating the total pixel number of the connected region according to its contour coordinates. In this way, the total pixel number of each connected region in each sub-binary image can be calculated.
In an exemplary embodiment, the preset threshold may be an empirical value determined experimentally, or may be a numerical value calculated from size information (including pixel width W and pixel height H) of the original image, or the like. Here, the method for determining the preset threshold in the embodiment of the present disclosure is not limited, and may be determined by a person skilled in the art according to an actual application scenario.
For example, a preset threshold corresponding to the size information (including the pixel width W and the pixel height H) of the original image is acquired based on a mapping relationship between the image size stored in advance and the preset threshold.
For example, the larger one of the pixel width W and the pixel height H of the original image is determined as a numerator x, and a preset threshold C is calculated according to a preset constant y and by a formula C = x/y, where y is a positive integer smaller than x, and C is a positive integer.
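For example, steps C1 to C3 may be sketched with OpenCV's connected component analysis as follows; the use of cv2.connectedComponentsWithStats and the divisor y = 40 are illustrative assumptions rather than requirements of the disclosure:

    import cv2
    import numpy as np

    def regions_to_process(sub_binary: np.ndarray, y: int = 40) -> list:
        """Return masks of connected regions whose total pixel number exceeds C."""
        h, w = sub_binary.shape
        C = max(w, h) // y  # preset threshold C = x / y, x being the larger of W and H
        # Step C1: label the 8-connected components of the "1" pixels.
        num, labels, stats, _ = cv2.connectedComponentsWithStats(sub_binary, connectivity=8)
        kept = []
        for label in range(1, num):                  # label 0 is the background
            if stats[label, cv2.CC_STAT_AREA] > C:   # steps C2 and C3: keep large regions
                kept.append(labels == label)         # boolean mask of one region to be processed
        return kept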
In an exemplary embodiment, the binary image corresponding to the original image may include: sub-binary images in one-to-one correspondence with the sub-gray value intervals, and the region to be processed in the binary image corresponding to the original image may include: the region to be processed in each sub-binary image. Thus, the feature data of the target image region may include: the feature data of the region to be processed in each sub-binary image. In this case, step 2213 may include the following steps D1 to D5:
Step D1: determining the sub-gray value interval corresponding to each sub-binary image as the gray scale information of the region to be processed in that sub-binary image;
Step D2: performing edge detection processing on the region to be processed in each sub-binary image to obtain the contour coordinates of the region to be processed in that sub-binary image;
Step D3: determining the contour coordinates of the region to be processed in each sub-binary image as the position information of the region to be processed in that sub-binary image;
Step D4: performing curve approximation processing on the region to be processed in each sub-binary image based on its contour coordinates, to obtain curve parameters for representing the shape of the region to be processed in that sub-binary image; wherein the curve parameters include: curve equation information and curve posture information;
Step D5: determining the curve parameters for representing the shape of the region to be processed in each sub-binary image as the shape information of the region to be processed in that sub-binary image.
Here, the outline (which may also be referred to as an edge or a boundary) of the region to be processed may refer to the portion of the sub-binary image where the gray value (i.e., the luminance) of the region to be processed changes significantly, that is, where one gray value changes sharply within a small buffer region to another, greatly different gray value. For example, an edge detection algorithm may be used to perform edge detection processing on the region to be processed in the sub-binary image. For example, the edge detection algorithm may include, but is not limited to, a Hough transform algorithm or a Canny edge detection algorithm, among others. Here, the embodiment of the present disclosure does not limit this.
In an exemplary embodiment, the contour coordinates of the region to be processed may include: the coordinates of at least some of the contour points in the series of contour points forming the outer contour of the region to be processed. That is, the contour coordinates of the region to be processed can represent the position of the region to be processed in the sub-binary image, and further can represent the position of the corresponding target image region in the original image.
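For example, the edge detection and contour coordinate extraction of steps D2 and D3 may be sketched as follows; cv2.findContours is one concrete choice (the two-return-value form assumes OpenCV 4), while the disclosure itself only requires some edge detection algorithm:

    import cv2
    import numpy as np

    def contour_coordinates(region_mask: np.ndarray) -> np.ndarray:
        """Return the outer contour coordinates of one region to be processed."""
        mask = region_mask.astype(np.uint8)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        largest = max(contours, key=cv2.contourArea)  # outer contour of the region
        return largest.reshape(-1, 2)                 # (K, 2) array of (x, y) contour points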
For example, suppose the gray value interval corresponding to the original image is divided into 5 sub-gray value intervals, namely the 1st to 5th sub-gray value intervals. Binarizing the original image according to the 1st to 5th sub-gray value intervals then yields a binary image corresponding to the original image that includes 5 sub-binary images, i.e., the 1st sub-binary image (corresponding to the 1st sub-gray value interval) to the 5th sub-binary image (corresponding to the 5th sub-gray value interval). Suppose further that the region to be processed in the 1st sub-binary image includes connected region 1 (region to be processed 1) and connected region 2 (region to be processed 2), the 2nd sub-binary image has no region to be processed, the region to be processed in the 3rd sub-binary image includes connected region 3 (region to be processed 3), the region to be processed in the 4th sub-binary image includes connected region 4 (region to be processed 4), and the region to be processed in the 5th sub-binary image includes connected region 5 (region to be processed 5) and connected region 6 (region to be processed 6). Then step D1 may include: acquiring the 1st sub-gray value interval corresponding to the 1st sub-binary image as the gray scale information of regions to be processed 1 and 2; acquiring the 3rd sub-gray value interval corresponding to the 3rd sub-binary image as the gray scale information of region to be processed 3; acquiring the 4th sub-gray value interval corresponding to the 4th sub-binary image as the gray scale information of region to be processed 4; and acquiring the 5th sub-gray value interval corresponding to the 5th sub-binary image as the gray scale information of regions to be processed 5 and 6. Step D3 may include: acquiring the contour coordinates of connected regions 1 to 6 as the position information of regions to be processed 1 to 6, respectively.
Then, step D5 may include: acquiring the curve parameters for representing the shapes of connected regions 1 to 6 as the shape information of regions to be processed 1 to 6, respectively.
In an exemplary embodiment, one curve may be considered as the intersection of two curved surfaces. Then, curve approximation processing may be performed on the to-be-processed region corresponding to each sub-binary image in a spatial curved surface intersection manner, so as to obtain curve equation information of the intersection curve and curve posture information of the intersection curve, which are used as curve parameters for representing the shape of the to-be-processed region in each sub-binary image. Here, the intersection of the two spatial curved surfaces is a curve for representing the shape of the region to be processed.
In an exemplary embodiment, step D4 may include: determining a fitted curve corresponding to the contour of the region to be processed in each sub-binary image based on the contour coordinates of the region to be processed; converting the fitted curve corresponding to the contour into a corresponding first spatial curved surface; determining a minimum circumscribed circle corresponding to the contour of the region to be processed based on the contour coordinates; converting the minimum circumscribed circle into a corresponding second spatial curved surface; intersecting the first spatial curved surface and the second spatial curved surface corresponding to the contour, and adjusting the surface parameters until the intersection curve between the adjusted first spatial curved surface and the adjusted second spatial curved surface approaches the contour of the region to be processed in each sub-binary image, so as to obtain the curve equation information and curve posture information corresponding to the intersection curve; and determining the curve equation information and curve posture information corresponding to the intersection curve as the curve parameters for representing the shape of the region to be processed in each sub-binary image.
In an exemplary embodiment, the fitted curve corresponding to the contour of the region to be processed in each sub-binary image may be determined from the contour coordinates of the region to be processed by a curve fitting method. For example, the curve fitting method may include, but is not limited to, a least squares polynomial curve fitting method, and the like. Here, the embodiment of the present disclosure does not limit this.
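For example, a least squares polynomial fit over the contour points may be sketched with np.polyfit (the polynomial degree is an arbitrary illustrative choice):

    import numpy as np

    def fit_contour_curve(points: np.ndarray, degree: int = 2) -> np.ndarray:
        """Least squares polynomial fit y = p(x) through (K, 2) contour points."""
        x, y = points[:, 0], points[:, 1]
        return np.polyfit(x, y, degree)  # coefficients, highest power first

    # Usage: coeffs = fit_contour_curve(pts); fitted_y = np.polyval(coeffs, pts[:, 0])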
In an exemplary embodiment, the fitted curve corresponding to the contour of the region to be processed in each sub-binary image may be converted into the first spatial curved surface (e.g., a parabola expanded into a parabolic surface), and the minimum circumscribed circle corresponding to the contour may be converted into the second spatial curved surface (e.g., a circle expanded into a cylindrical surface), by a dimension expansion method (i.e., expanding a curve in a two-dimensional coordinate system into a curved surface in a three-dimensional coordinate system). Here, the embodiment of the present disclosure does not limit this.
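As an illustration of the idea (generic forms, not the disclosure's exact equations), the dimension expansion simply makes a planar equation independent of the third coordinate:

$$y = ax^2 + bx + c \;\longrightarrow\; \{(x, y, z) : y = ax^2 + bx + c\} \quad \text{(parabola} \to \text{parabolic surface)}$$

$$(x - x_0)^2 + (y - y_0)^2 = r^2 \;\longrightarrow\; \{(x, y, z) : (x - x_0)^2 + (y - y_0)^2 = r^2\} \quad \text{(circle} \to \text{cylindrical surface)}$$

The intersection of the two surfaces is a space curve whose projection back onto the xy-plane approximates the contour of the region to be processed.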
In an exemplary embodiment, step 23 may include: encapsulating the feature data of the target image area into a first image data block by using a unified, dedicated first preset data structure, wherein the first image data block may include: first indication information for indicating that the compression type is an image area represented in a spatial graphics manner, second indication information for indicating the position of the feature data of the target image area within the first image data block, and the feature data of the target image area; encapsulating the compressed data of the other image areas into a second image data block by using a unified, dedicated second preset data structure, wherein the second image data block may include: third indication information for indicating that the compression type is an image area represented in a conventional compressed data manner, second indication information for indicating the position of the compressed data of the other image areas within the second image data block, and the compressed data of the other image areas; and serializing the first image data block and the second image data block in sequence to form the compressed data corresponding to the original image. In this way, the compressed data corresponding to the original image can be saved in a storage medium or transmitted in real time. Here, the data format of the compressed data corresponding to the original image is not limited in the embodiment of the present disclosure.
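For example, such a serialized stream may be sketched as follows; the field names, tag values, and byte layout are illustrative assumptions, since the disclosure does not fix a data format:

    from dataclasses import dataclass
    from typing import List

    COMPRESSION_SPATIAL = 1       # first indication information: region described by curves
    COMPRESSION_CONVENTIONAL = 2  # third indication information: pixel-by-pixel compressed data

    @dataclass
    class ImageDataBlock:
        compression_type: int     # which of the two representations this block holds
        payload_offset: int       # indication of where the payload sits inside the block
        payload: bytes            # feature data, or conventionally compressed data

    def serialize(blocks: List[ImageDataBlock]) -> bytes:
        """Serialize the image data blocks in sequence into one compressed stream."""
        out = bytearray()
        for b in blocks:
            out += b.compression_type.to_bytes(1, "big")
            out += b.payload_offset.to_bytes(4, "big")
            out += b.payload
        return bytes(out)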
For example, taking the original image to be processed as the X-ray detection image shown in FIG. 3A, for the irregular cavity in circle 31, the pixel width of the corresponding target image region is about 72 pixels and the pixel height is about 56 pixels, so the RGB storage space of that target image region is about 72 × 56 × 3 = 12096 bytes, i.e., about 12 KB. With the image compression method provided by the embodiment of the present disclosure, the target image area corresponding to the irregular cavity is represented by feature data such as shape information, position information, and gray scale information, and its storage space does not exceed 1000 bytes, i.e., less than 1 KB. Therefore, the required transmission space and storage space, and thus the storage space required for the compressed data corresponding to the original image, can be greatly reduced.
The embodiment of the disclosure also provides an image decompression method, which can be applied to an image decompression device. Fig. 4 is a flowchart illustrating an image decompression method in an exemplary embodiment of the present disclosure, and as shown in fig. 4, the image decompression method may include:
step 41: obtaining compressed data corresponding to an original image;
the compressed data corresponding to the original image may include: the feature data of the target image area in the original image and the compressed data of the other image areas except the target image area in the original image comprise: shape information of the target image area, position information of the target image area, and gradation information of the target image area.
Step 42: obtaining a first image containing the other image area based on the compressed data of the other image area;
step 43: obtaining a second image containing the target image area based on the characteristic data of the target image area;
and step 44: and combining the first image and the second image to obtain an original image.
In an exemplary embodiment, the compressed data corresponding to the original image may include: a first image data block encapsulated with a first preset data structure and a second image data block encapsulated with a second preset data structure, wherein the first image data block may include: first indication information for indicating that the compression type is an image area represented based on a spatial graphic mode, second indication information for indicating the position of the feature data of the target image area in the first image data block, and feature data of the target image area; the second image data block may include: third indication information for indicating that the compression type is an image area represented based on a conventional compressed data manner, second indication information for indicating a position of compressed data of the other image area in the second image data block, and compressed data of the other image area. Here, the data format of the compressed data corresponding to the original image is not limited in the embodiment of the present disclosure.
In an exemplary embodiment, the gray value of the image region other than the other image areas in the first image may be a preset second value for representing white (for example, 0), and the gray value of the image region other than the target image area in the second image may also be a preset second value for representing white (for example, 0). In this way, the gray values of the pixels at corresponding positions in the first image and the second image can be added to obtain a restored image corresponding to the original image.
In an exemplary embodiment, taking as an example that the gray value of the image area other than the target image area in the second image may be a preset second value for representing white (for example, 0), step 43 may include the following steps 431 to 433:
Step 431: generating an initialization image;
wherein the gray value of the pixels of the initialization image may be a preset second value for representing white. For example, the preset second value may be 0. Here, the embodiment of the present disclosure does not limit this.
Step 432: generating the shape of the target image area at the position of the target image area in the initialization image based on the shape information of the target image area and the position information of the target image area;
Step 433: filling the pixels within the shape of the target image area based on the gray scale information of the target image area, to obtain the second image.
In an exemplary embodiment, step 433 may include: calculating the mean of the maximum gray value and the minimum gray value of the gray value interval; and using the mean as the fill gray value of the pixels within the shape of the target image area, to obtain the second image. In this way, the restored image obtained by combining the first image and the second image has some loss in precision compared with the original image before compression, but in fields such as medicine and industrial inspection, where the requirements on image color are not high, a large amount of storage space and transmission space can be saved.
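For example, steps 431 to 433 and the combination of step 44 may be sketched as follows; cv2.fillPoly is an assumed way of rasterizing the decoded shape, and uint8 is used for simplicity (a 16-bit original would use a wider dtype):

    import cv2
    import numpy as np

    def rebuild_second_image(shape_points: np.ndarray, interval, canvas_hw) -> np.ndarray:
        """Rebuild the second image from one target region's feature data."""
        lo, hi = interval                       # gray value interval of the region
        img = np.zeros(canvas_hw, np.uint8)     # step 431: initialization image (all 0 = white)
        fill = (lo + hi) // 2                   # mean of the interval endpoints
        cv2.fillPoly(img, [shape_points.astype(np.int32)], int(fill))  # steps 432 and 433
        return img

    # Step 44: the restored image is the pixel-wise sum of the first and second images,
    # e.g. restored = first_image + second_image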
The following describes processes of an image compression method and an image decompression method provided by embodiments of the present disclosure with specific examples.
S1: acquiring an original image to be processed, acquiring the gray value interval (which may also be called the color level range) L corresponding to the original image, and acquiring the size information of the original image (including the pixel width W and the pixel height H).
S2: setting a level boundary sequence according to the characteristics of the image, so as to divide the gray value interval corresponding to the original image into N sub-gray value intervals, where N represents the total number of divided layers.
How to divide the N sub-gray value intervals can be understood with reference to the description in the foregoing embodiments, and is not repeated here.
S3: taking out the ith sub-gray value interval to be processed from the N sub-gray value intervals, and acquiring the minimum gray value min_i and the maximum gray value max_i of the ith sub-gray value interval; where i ranges from 1 to N, and N is a positive integer greater than 1.
For example, in the first pass, the 1st sub-gray value interval may be taken out of the N sub-gray value intervals; i is then increased sequentially until all N sub-gray value intervals have been processed.
S4: using the minimum gray value min_i and the maximum gray value max_i of the ith sub-gray value interval acquired in S3 as thresholds, performing binarization processing on the original image by the binarization processing method shown in formula (2) to obtain the ith sub-binary image Gi corresponding to the ith sub-gray value interval. The sub-binary image Gi represents the graphic shape of the parts of the original image whose gray values fall within the ith sub-gray value interval.
Here, since the original image is an image applied in fields such as medicine and industrial inspection, and such images have discrete gray-scale and color distributions, the image quality is not significantly reduced under appropriate layering.
S5: after the ith sub-binary image Gi is obtained, the contours S of the multiple connected regions composed of "1" pixels in Gi, and the arrays recording the contour coordinates CA of those contours, can be obtained by an image segmentation method. These data are saved.
S6: obtaining, from the two arrays corresponding to the ith sub-binary image Gi, the contour S of one connected region and the contour coordinates CA corresponding to that contour; calculating the total pixel number count of the pixels within the contour according to the contour coordinates CA; and, when the total pixel number count of the connected region is greater than the preset threshold C, saving the contour S of the divided connected region (i.e., a region to be processed corresponding to the ith sub-binary image Gi) and the array of its contour coordinates CA into a first array CR1, and otherwise saving them into a second array CR2.
S7: repeating S3 to S6 until the contours of the recognizable regions of all the sub-binary images have been identified.
S8: after the above process, the shape of each contour S in the first array CR1 is represented by curved surfaces. For example, the surface representation method may be: taking the shape of the contour S in the first array CR1 as the target, intersecting two spatial curved surfaces and adjusting the curve parameters so that the shape of the surface intersection curve is close to the shape of the target contour (i.e., the intersection curve between the adjusted first spatial curved surface and the adjusted second spatial curved surface approaches the contour of the region to be processed in each sub-binary image), recording the surface equations of the two curved surfaces, the curve posture of the intersection curve, and the contour coordinates CA corresponding to the contour S, and saving them in the dedicated data structure P.
For example, taking the image shown in fig. 3, consider the irregular cavity inside the circle. After curve approximation is applied to the contour shape of the target image region corresponding to the cavity, the curve equation information of the intersection curve can be expressed by the surface equations shown in formula (3) below. Since the cavity is roughly circular, a cylindrical surface and a paraboloid can be intersected, and the shape of the target image region corresponding to the cavity can be expressed as a whole by their intersection line L.
(Formula (3) appears only as an image in the original publication; it gives the simultaneous equations of the cylindrical surface and the paraboloid whose intersection line L represents the cavity contour.)
For example, as shown in fig. 5, by eliminating z from the cylindrical-surface equation and x from the paraboloid equation, the surface equations can be converted into those of projecting cylinders whose intersection is the line L; the surface equation information of the curve representing the shape of the target image region corresponding to the irregular cavity can then be expressed by formula (4) below.
(Formula (4) likewise appears only as an image in the original publication; it gives the projected form of the surface equations.)
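Because formulas (3) and (4) survive only as images in this publication, the following is a purely illustrative cylinder–paraboloid pair of the kind the text describes; the radius r and the coefficients a, c are hypothetical:

```latex
% Formula (3), illustrative stand-in: two spatial surfaces whose
% intersection line L approximates the cavity contour.
\begin{cases}
  y^{2} + z^{2} = r^{2} & \text{(cylindrical surface, axis along } x\text{)} \\
  z = a x^{2} + c       & \text{(parabolic surface)}
\end{cases}

% Formula (4), illustrative stand-in: eliminating z projects the
% intersection line L onto the xy-plane as a plane curve.
y^{2} + \left( a x^{2} + c \right)^{2} = r^{2}
```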
After the above steps, an approximating curve L of the contour shape of the target image region corresponding to the irregular cavity has been obtained; rotating the curve L by a certain angle (i.e., applying the curve posture information) yields a curve even closer to the contour shape of the original cavity's target image region.
Then, similarly to the steps above, the curve L may be intersected with several two-dimensional curves, so that each intersection approximates the shape of the cavity more closely.
S9: According to the difference between the contour shape of the target image region corresponding to the irregular cavity and the curve L obtained so far, add a two-dimensional curve and intersect it with the curve L, so that the new intersection line L' approximates the contour shape still further. Record the new intersection line L' as the curve L.
S10: Repeat the previous two steps until the curve L matches the contour shape of the target image region corresponding to the irregular cavity. Record parametrically the curve parameters representing the shape of the region to be processed corresponding to each sub-binary image; the curve parameters may include curve equation information of the intersection curve (i.e., the equations of the curved surfaces forming it) and posture information of the intersection curve (i.e., its attitude angles relative to the coordinate system). Record the contour coordinates of the region to be processed corresponding to each sub-binary image as the position information of that region in its sub-binary image, and record the sub-gray value interval corresponding to each sub-binary image as the gray information of the region to be processed within the target image region.
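The parametric record of S10 might be organized as follows (a sketch; the dataclass and its field names are assumptions standing in for the dedicated data structure P of S8):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RegionFeature:
    """Feature data for one region to be processed."""
    surface_equations: List[str]                # equations of the intersecting surfaces
    posture_angles: Tuple[float, float, float]  # attitude of the intersection curve
    contour_coords: List[Tuple[int, int]]       # position information (CA)
    gray_interval: Tuple[int, int]              # (min_i, max_i) of the layer
```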
S11: Repeat S7 to S10 until all sub-gray value intervals have been processed, obtaining the feature data of the target image region in the original image, where the feature data of the target image region includes: shape information of the target image region (including curve parameters representing its shape, for example, curve equation information and curve posture information corresponding to the intersection curve), position information of the target image region, and the gray value interval corresponding to the target image region.
S12: the gap image region in the original image which is not processed in S7 to S10, and the small image region CR2 or the like which is not represented using this method are represented in accordance with a conventional image compression method, and compressed data of the other image regions in the original image except the target image region is obtained.
S13: Combine the feature data of the target image region in the original image with the compressed data of the other image regions into the compressed data corresponding to the original image, for storage, transmission, and so on.
Thus, the compression process of the original image is completed.
After the compressed data corresponding to the original image are obtained, they can be rendered again on a computer when needed. The decompression process is the reverse of the compression process above.
S14: Separate the plurality of hierarchical structures according to the storage format of the compressed data corresponding to the original image.
When the file of S14 is stored, all image region blocks processed in S12 and S13, including the target image region and the non-target image regions of S12, are packaged into corresponding image data blocks using a uniform, dedicated data structure. When writing an image data block, besides writing the image data themselves (e.g., the feature information of a target image region or the compressed data of other image regions), the type of the region image data must be marked in the block (e.g., a region represented by the spatial-graphics approach or a region represented by conventional compressed data) to provide a basis for reading during image decompression. The image data block may further include indication information indicating the position of image data within the block (for example, the positions of the shape information, position information, and gray information of the target image region), which is an important parameter for image display. These image data blocks are serialized in sequence to form the compressed image file format, which can be stored in a storage medium.
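A minimal serialization sketch for such blocks (the byte layout here — a one-byte type tag followed by a length-prefixed payload — is entirely hypothetical, chosen only to illustrate the type mark and the position indication the text requires):

```python
import struct

TYPE_SPATIAL_GRAPHICS = 0  # region represented by surface/curve parameters
TYPE_CONVENTIONAL = 1      # region represented by conventional compressed data

HEADER = "<BI"  # type tag (1 byte) + payload length (4 bytes, little-endian)

def pack_block(region_type: int, payload: bytes) -> bytes:
    """Serialize one image data block; the length field acts as the
    'indication information' that locates data within the stream."""
    return struct.pack(HEADER, region_type, len(payload)) + payload

def unpack_block(buf: bytes, offset: int = 0):
    """Read one block back; returns (region_type, payload, next_offset)."""
    region_type, length = struct.unpack_from(HEADER, buf, offset)
    start = offset + struct.calcsize(HEADER)
    return region_type, buf[start:start + length], start + length
```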
S15: from the hierarchical structure in the above step, it is determined whether the other image region is represented based on the conventional image compression method or the target image region is represented in a spatial graphic manner according to the exemplary embodiment of the present disclosure, in accordance with the region data structure.
S16: if the image is other image areas represented by the traditional image compression method, the data structure is taken out, and the processing mode is that the data of each pixel position are arranged one by one according to the traditional decompression method. The gradation represented by the data size is processed according to the color gamut components of the color map. In this way, a first image containing other image regions can be obtained.
S17: if the target image area is represented in the spatial graph mode according to the exemplary embodiment of the present disclosure, after the data structure is taken out, first, an initialization image with all gray scale values of 0 is generated, then, according to shape information (including curve equation information and curve posture information for representing the shape of the target image area) of the target image area and position information of the target image area in the feature data of the target image area, a graph (i.e., a contour shape) corresponding to the target image area is generated at the position of the target image area in the initialization image, and then, according to a gray scale value interval corresponding to the target image area, a filling process is performed on the graph (i.e., the contour shape) corresponding to the target image area in the initialization image, and gray scale values of all pixels in the graph corresponding to the target image area are replaced by a mean value (max + min)/2 between a maximum gray scale value max and a minimum gray scale value min of the interval corresponding to the target image area, so that a second image including the target image area can be obtained.
Here, the binary image corresponding to the original image includes the plurality of sub-binary images, the target image region corresponding to the original image may include the plurality of regions to be processed corresponding to those sub-binary images, and the second image may be an image formed by stitching the plurality of regions to be processed together.
S18: the first image and the second image are combined and added to obtain an original image.
Because pixels with value 0 contribute nothing when the first image and the second image are merged by addition, they do not affect the other image layers. The stacked layers can therefore be regarded as the original image. Some precision is lost relative to the pre-compression original, but in fields such as medicine and industrial inspection, which do not place high demands on image color, a large amount of storage and transmission space can be saved.
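Under the stated assumption that the two layers never place non-zero pixels at the same position, the merge of S18 reduces to a pixel-wise addition (a sketch; saturated addition via cv2.add is this illustration's choice):

```python
import cv2

def merge_layers(first_image, second_image):
    """Combine the decompressed layers (step S18); zero pixels leave
    the other layer untouched."""
    return cv2.add(first_image, second_image)
```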
Fig. 6 is a schematic diagram of an application scenario of an image compression method and an image decompression method in an exemplary embodiment of the present disclosure. Application scenarios of the image compression method and the image decompression method provided by the exemplary embodiment of the present disclosure are described below with reference to fig. 6.
In an exemplary embodiment, as shown in fig. 6, the data acquisition module 601 is configured to acquire a raw image to be processed, for example an X-ray inspection image captured by the data acquisition terminal. The processing module 602 is configured to obtain, from the original image, the compressed data corresponding to it by the image compression method of one or more of the embodiments above, and to store those compressed data in the storage module 603. The storage module 603 is configured to store the compressed data corresponding to the original image.
In an exemplary embodiment, as shown in fig. 6, the processing module 602 is further configured to obtain the compressed data corresponding to the original image from the storage module 603 and, by the image decompression method of one or more of the embodiments above, to recover the original image from those compressed data.
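An end-to-end sketch of the fig. 6 scenario (compress_image and decompress_image are placeholders standing in for the methods above; their byte-copy bodies exist only to make the example runnable):

```python
import numpy as np

def compress_image(original):
    """Placeholder for the image compression method of the embodiments above."""
    return original.tobytes()

def decompress_image(blob, shape=(512, 512)):
    """Placeholder for the matching image decompression method."""
    return np.frombuffer(blob, dtype=np.uint8).reshape(shape)

storage = {}                                 # stands in for storage module 603
original = np.zeros((512, 512), np.uint8)    # e.g., an X-ray inspection image
storage["scan_0001"] = compress_image(original)     # compression path (module 602)
restored = decompress_image(storage["scan_0001"])   # decompression path (module 602)
assert np.array_equal(original, restored)
```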
The above description of the application scenario embodiment is similar to the description of the method embodiments above and has similar beneficial effects. For technical details not disclosed in the application scenario embodiment of the present disclosure, those skilled in the art should refer to the description in the method embodiments of the present disclosure; the details are therefore omitted here.
The embodiments of the present disclosure further provide an image compression apparatus. The image compression apparatus may include: a processor and a memory storing a computer program operable on the processor, wherein the processor, when executing the computer program, implements the steps of the image compression method in one or more of the exemplary embodiments above.
In an exemplary embodiment, as shown in fig. 7, the image compression apparatus 70 may include: at least one processor 701, and at least one memory 702 and a bus 703 connected to the processor 701; the processor 701 and the memory 702 communicate with each other through the bus 703; the processor 701 is configured to call program instructions in the memory 702 to perform the steps of the image compression method in one or more of the embodiments above.
The embodiments of the present disclosure further provide an image decompression apparatus. The image decompression apparatus may include: a processor and a memory storing a computer program operable on the processor, wherein the processor, when executing the computer program, implements the steps of the image decompression method in one or more of the exemplary embodiments above.
In an exemplary embodiment, as shown in fig. 8, the image decompression apparatus 80 may include: at least one processor 801, and at least one memory 802 and a bus 803 connected to the processor 801; the processor 801 and the memory 802 communicate with each other through the bus 803; the processor 801 is configured to call program instructions in the memory 802 to perform the steps of the image decompression method in one or more of the embodiments above.
In an exemplary embodiment, the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, an application-specific integrated circuit, or the like. The general-purpose processor may be a microprocessor (MPU) or any conventional processor. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the memory may include volatile memory, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (Flash RAM), in a computer-readable storage medium, and the memory includes at least one memory chip. The embodiments of the present disclosure do not limit this.
In an exemplary embodiment, the bus may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. For clarity of illustration, however, the various buses are labeled collectively as bus 703 in fig. 7 and bus 803 in fig. 8. The embodiments of the present disclosure do not limit this.
In implementation, the processing performed by the image compression apparatus and the image decompression apparatus may be carried out by hardware integrated logic circuits in the processor or by instructions in the form of software. That is, the method steps of the embodiments of the present disclosure may be performed by a hardware processor, or by a combination of hardware in the processor and software modules. The software module may be located in a storage medium such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware. To avoid repetition, this is not described in detail here.
The embodiments of the present disclosure further provide a computer-readable storage medium comprising a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the steps of the image compression method in one or more of the embodiments above.
The embodiments of the present disclosure further provide a computer-readable storage medium comprising a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to execute the steps of the image decompression method in one or more of the embodiments above.
In an exemplary embodiment, the computer-readable storage medium may be a ROM/RAM, a magnetic disk, an optical disk, or the like. The embodiments of the present disclosure do not limit this.
The above description of the apparatus and computer-readable storage medium embodiments is similar to the description of the method embodiments above and has similar beneficial effects. For technical details not disclosed in the apparatus or computer-readable storage medium embodiments of the present disclosure, refer to the description of the method embodiments of the present disclosure; they are not described in detail here.
The embodiment of the disclosure also provides a display device. The display device may include: the image compression apparatus in one or more of the above-described exemplary embodiments and the image decompression apparatus in one or more of the above-described exemplary embodiments.
In an exemplary embodiment, the display device may be any product or component with a display function, such as a mobile phone, tablet computer, television, monitor, notebook computer, or navigator. The embodiments of the present disclosure do not limit the type of the display device. Other essential components of the display device will be understood by those skilled in the art; they are not described here, nor should they be construed as limiting the present disclosure.
For technical details not disclosed in the display device embodiments of the present disclosure, those skilled in the art should refer to the description of the method and apparatus embodiments of the present disclosure with reference to the drawings; a detailed description is omitted here.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, and the functional modules/units in the systems and devices disclosed above, may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. In addition, as is well known to those skilled in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
Although the embodiments of the present disclosure have been described above, the description is given only to aid understanding of the present disclosure and is not intended to limit it. Those skilled in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the disclosure, and that the scope of the disclosure is defined solely by the appended claims.

Claims (14)

1. An image compression method comprising:
obtaining an original image to be processed; wherein the original image comprises: a target image area and other image areas than the target image area;
acquiring feature data of the target image area based on the original image, and acquiring compressed data of the other image areas; wherein the feature data of the target image area includes: shape information of the target image area, position information of the target image area, and gray information of the target image area;
and generating and outputting compressed data corresponding to the original image based on the feature data of the target image area and the compressed data of the other image area.
2. The method of claim 1, wherein the obtaining feature data of the target image area based on the original image comprises:
carrying out binarization processing on the original image to obtain a binary image corresponding to the original image;
performing image segmentation processing on the binary image, determining a region to be processed in the binary image corresponding to the target image region, and acquiring feature data of the region to be processed in the binary image; wherein the feature data of the region to be processed in the binary image comprises: the shape information of the region to be processed in the binary image, the position information of the region to be processed in the binary image and the gray information of the region to be processed in the binary image;
and determining the characteristic data of the area to be processed as the characteristic data of the target image area.
3. The method according to claim 2, wherein the binarizing the original image to obtain a binary image corresponding to the original image comprises:
obtaining a gray value interval corresponding to the original image based on the gray value of the pixel of the original image;
dividing the gray value interval into a plurality of sub gray value intervals;
respectively carrying out binarization processing on the original image based on the multiple sub gray value intervals to obtain multiple sub binary images corresponding to the multiple sub gray value intervals one by one; wherein the binary image comprises: the plurality of sub-binary images.
4. The method according to claim 3, wherein the binarizing the original image based on the sub gray value intervals to obtain a plurality of sub binary images corresponding to the sub gray value intervals one by one comprises:
for each sub-gray value interval, performing the following operations on the original image:
traversing each pixel in the original image, and respectively comparing the gray value of each pixel with the maximum gray value and the minimum gray value of the sub-gray value interval;
and according to the comparison result, determining the binarization value of the pixel of which the gray value is located in the sub gray value interval as a preset first value for representing black, and determining the binarization value of the pixel of which the gray value is not located in the sub gray value interval as a preset second value for representing white, so as to obtain a sub-binary image corresponding to the sub gray value interval.
5. The method according to claim 3, wherein the performing image segmentation processing on the binary image and determining a region to be processed in the binary image corresponding to the target image region comprises:
for each sub-binary image the following operations are performed:
performing connected domain analysis on the sub-binary image to obtain a plurality of connected regions in the sub-binary image;
calculating the total number of pixels of a plurality of connected regions in the sub-binary image;
and determining the connected regions with the total pixel number larger than a preset threshold value in the plurality of connected regions in the sub-binary image as the regions to be processed in the sub-binary image corresponding to the target image region.
6. The method according to claim 3, wherein the obtaining feature data of the region to be processed in the binary image comprises:
determining the sub gray value interval corresponding to each sub-binary image as the gray information of the region to be processed in each sub-binary image;
performing edge detection processing on the region to be processed in each sub-binary image to obtain the contour coordinates of the region to be processed in each sub-binary image;
determining the contour coordinates of the region to be processed in each sub-binary image as the position information of the region to be processed in each sub-binary image;
performing curve approximation processing on the region to be processed in each sub-binary image based on the contour coordinates of the region to be processed in each sub-binary image to obtain curve parameters representing the shape of the region to be processed in each sub-binary image; wherein the curve parameters include: curve equation information and curve posture information;
and determining the curve parameters representing the shape of the region to be processed in each sub-binary image as the shape information of the region to be processed in each sub-binary image.
7. The method according to claim 6, wherein the performing curve approximation processing on the region to be processed in each sub-binary image based on the contour coordinates of the region to be processed in each sub-binary image to obtain curve parameters representing the shape of the region to be processed in each sub-binary image comprises:
determining a fitted curve corresponding to the contour of the region to be processed in each sub-binary image based on the contour coordinates of the region to be processed in each sub-binary image;
converting the fitted curve corresponding to the contour of the region to be processed in each sub-binary image into a corresponding first spatial curved surface;
determining a minimum circumscribed circle corresponding to the contour of the region to be processed in each sub-binary image based on the contour coordinates of the region to be processed in each sub-binary image;
converting the minimum circumscribed circle corresponding to the contour of the region to be processed in each sub-binary image into a corresponding second spatial curved surface;
intersecting the first spatial curved surface and the second spatial curved surface corresponding to the contour of the region to be processed in each sub-binary image, and adjusting the surface parameters until the intersection curve between the adjusted first spatial curved surface and the adjusted second spatial curved surface approaches the contour of the region to be processed in each sub-binary image, so as to obtain curve equation information and curve posture information corresponding to the intersection curve;
and determining the curve equation information and curve posture information corresponding to the intersection curve as the curve parameters representing the shape of the region to be processed in each sub-binary image.
8. An image compression apparatus comprising: a processor and a memory storing a computer program operable on the processor, wherein the processor when executing the program performs the steps of the image compression method according to any of claims 1 to 7.
9. A computer-readable storage medium comprising a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to perform the steps of the image compression method according to any one of claims 1 to 7.
10. An image decompression method comprising:
receiving compressed data corresponding to an original image; wherein, the compressed data corresponding to the original image comprises: the feature data of a target image area in the original image and the compressed data of other image areas except the target image area in the original image comprise: shape information of the target image area, position information of the target image area, and gray scale information of the target image area;
obtaining a first image containing the other image area based on the compressed data of the other image area;
obtaining a second image containing the target image area based on the characteristic data of the target image area;
and combining the first image and the second image to obtain the original image.
11. The method of claim 10, wherein obtaining the second image containing the target image region based on the feature data of the target image region comprises:
generating an initialization image, wherein the gray value of a pixel of the initialization image is a preset second value for representing white;
generating a shape of the target image region at a position of the target image region in the initialization image based on shape information of the target image region and position information of the target image region;
and filling pixels in the shape of the target image area based on the gray information of the target image area to obtain the second image.
12. The method of claim 11, wherein the gray information comprises: a gray value interval; and the filling processing of pixels in the shape of the target image area based on the gray information of the target image area to obtain the second image comprises:
calculating the average value of the maximum gray value and the minimum gray value of the gray value interval;
and taking the average value as the gray value of the filled pixels in the shape of the target image area to obtain the second image.
13. An image decompression apparatus comprising: a processor and a memory storing a computer program operable on the processor, wherein the processor when executing the program performs the steps of the image decompression method according to any of claims 10 to 12.
14. A computer-readable storage medium comprising a stored program, wherein, when the program runs, a device on which the storage medium is located is controlled to perform the steps of the image decompression method according to any one of claims 10 to 12.
CN202110859790.9A 2021-07-28 2021-07-28 Image compression method and device, image decompression method and device, and storage medium Pending CN115695821A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110859790.9A CN115695821A (en) 2021-07-28 2021-07-28 Image compression method and device, image decompression method and device, and storage medium


Publications (1)

Publication Number Publication Date
CN115695821A true CN115695821A (en) 2023-02-03

Family

ID=85058902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110859790.9A Pending CN115695821A (en) 2021-07-28 2021-07-28 Image compression method and device, image decompression method and device, and storage medium

Country Status (1)

Country Link
CN (1) CN115695821A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115858832A (en) * 2023-03-01 2023-03-28 天津市邱姆预应力钢绞线有限公司 Method and system for storing production data of steel strand
CN116074514A (en) * 2023-04-06 2023-05-05 深圳市银河通信科技有限公司 Secure communication method of multimedia data and cloud broadcasting system
CN116566748A (en) * 2023-07-11 2023-08-08 南通原力云信息技术有限公司 Small program data transmission encryption method
CN116566748B (en) * 2023-07-11 2023-09-12 南通原力云信息技术有限公司 Small program data transmission encryption method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination