CN113721859B - Image repeated data deleting method based on artificial intelligence - Google Patents

Image repeated data deleting method based on artificial intelligence

Info

Publication number
CN113721859B
CN113721859B (application CN202111052365A)
Authority
CN
China
Prior art keywords
image
blocks
gray
value
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111052365.5A
Other languages
Chinese (zh)
Other versions
CN113721859A (en
Inventor
陈明
楚杨阳
李玉华
程军强
桑永宣
张世征
张静静
彭伟伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eurasia Hi Tech Digital Technology Co ltd
Original Assignee
Eurasia Hi Tech Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eurasia Hi Tech Digital Technology Co ltd filed Critical Eurasia Hi Tech Digital Technology Co ltd
Priority to CN202111052365.5A priority Critical patent/CN113721859B/en
Publication of CN113721859A publication Critical patent/CN113721859A/en
Application granted granted Critical
Publication of CN113721859B publication Critical patent/CN113721859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0652Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608Saving storage space on storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0614Improving the reliability of storage systems
    • G06F3/0619Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4038Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention relates to the field of image compression, and in particular to an image repeated data deleting method based on artificial intelligence. The method acquires image data and pre-processes it to obtain a plurality of gray image blocks and a marking value for each pixel point in every block. According to these marking values, it judges whether adjacent gray image blocks satisfy the same marking-value distribution and therefore whether image stitching can be performed. Gray image blocks that satisfy the condition are stitched in sequence to obtain similar region blocks and dissimilar region blocks; the degree of similarity between pixel points and the size of each similar region block and dissimilar region block are then obtained, from which the convolution kernel size and number of convolutions for each region block are determined. Each region block is convolved accordingly to obtain its image feature map, and finally the image feature maps are merged to obtain the compressed image data. In this way, the invention deletes repeated image data accurately.

Description

Image repeated data deleting method based on artificial intelligence
Technical Field
The invention relates to the field of image compression, in particular to an image repeated data deleting method based on artificial intelligence.
Background
Deduplication is a widely used data reduction technique in data backup and archiving that reduces the storage capacity consumed by deleting duplicate data from a data set, thereby eliminating redundant data.
At present, industrial methods for deleting repeated image data are mainly divided into lossy deletion and lossless deletion. These methods are essentially consistent with the idea of image compression: they all reduce the storage volume of an image by deleting its redundant data, thereby achieving image data compression. Existing methods for deleting redundant image data mainly convert the image data into frequency-domain data through a discrete cosine transform, perform quantization and inverse quantization on the frequency-domain data, pass it through an entropy encoder and entropy decoder, and obtain the compressed image data through an inverse discrete cosine transform. The disadvantage of such methods is that non-redundant data may also be deleted, that is, the data is not deleted accurately, resulting in a large loss of quality in the resulting image.
Therefore, a method capable of accurately deleting redundant data is needed.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an image repeated data deleting method based on artificial intelligence, which adopts the following technical scheme:
acquiring image data, and sequentially carrying out graying treatment and equal block dividing operation on the image data to obtain a plurality of gray image blocks;
performing image processing on the gray image blocks to obtain a marking value of each pixel point in each gray image block;
judging, according to the marking values of each gray image block, whether adjacent gray image blocks satisfy the same marking-value distribution; when they do, judging the adjacent gray image blocks to be similar and stitching them into a new gray image block, and continuing the judgment in sequence until all similar gray image blocks have been stitched into similar region blocks; otherwise, judging the two adjacent gray image blocks to be dissimilar and not stitching them, finally obtaining dissimilar region blocks;
calculating the degree of similarity between the pixel points in each similar region block according to the marking values of its pixel points, and taking the number of pixel points in the similar region block as the size of the similar region block; calculating the degree of similarity between the pixel points in each dissimilar region block according to the marking values of its pixel points, and taking the number of pixel points in the dissimilar region block as the size of the dissimilar region block; determining the convolution kernel size and number of convolutions of each region block according to the size and degree of similarity of the similar region blocks and of the dissimilar region blocks, respectively;
carrying out convolution processing on the corresponding similar region blocks and dissimilar region blocks according to the convolution kernel sizes and numbers of convolutions of the different region blocks, obtaining the corresponding image feature maps;
and merging the obtained image feature maps to obtain the compressed image data.
Further, the specific steps of the image processing are as follows:
1) Selecting any gray image block, and taking the difference between the gray value of the first pixel point in the first row of the block and those of the second, third and fourth pixel points following it in turn, obtaining the corresponding gray difference values;
2) Calculating the mean of the gray difference values and comparing this difference mean with a mean threshold; if the difference mean is smaller than or equal to the mean threshold, setting the marking values of all pixel points involved in the calculation to 0; if the difference mean is greater than the mean threshold, computing the marking value of the first pixel point from the difference mean and the mean threshold M1;
3) By analogy with steps 1) and 2), obtaining the marking value of the j-th pixel point, and thus the marking value of each pixel point of each gray image block.
Further, the method for judging whether adjacent gray image blocks satisfy the same marking-value distribution is as follows:
The marking values of two adjacent gray image blocks are mapped into three-dimensional space to obtain the corresponding three-dimensional distribution planes; the two planes are overlapped, the Euclidean distances between corresponding pixel points of the two gray image blocks are calculated, and the coupling degree is obtained as the mean of these Euclidean distances; when the coupling degree is smaller than a set coupling threshold, the two gray image blocks satisfy the same marking-value distribution. The size of the gray image block is the size of the base of the three-dimensional space, and the marking value is the three-dimensional height of the pixel point.
Further, after the same marking-value distribution condition is satisfied, the gray difference values of the edge pixel points of the adjacent gray image blocks are calculated; when the gray difference value is smaller than a set threshold, image stitching is performed to obtain a similar region block; when the gray difference value is larger than the set threshold, image stitching is not performed, finally obtaining a dissimilar region block.
Further, the degree of similarity is calculated from the marking values of the pixel points in the region block,
wherein i is any pixel point in the similar region block, Ki is the marking value of pixel point i, and n is the total number of pixel points in the similar region block.
Further, the convolution kernel size is determined by L, the ratio of the side length of the similar region block or dissimilar region block to the side length of the smallest region block, rounded down so that the kernel size is an integer.
Further, the number of convolutions is determined by the degree of similarity of the region block, rounded up.
Further, the method for merging the image feature maps is as follows: according to the adjacency of the gray image blocks, the part of each image feature map that is missing after the convolution operation is padded with zeros, and each feature map is then stitched directly with its adjacent image feature maps to obtain the compressed image data.
The beneficial effects of the invention are as follows:
the method obtains the similar area blocks and the dissimilar area blocks by obtaining the marking value of each pixel point in the image, better reserves the image characteristics of the dissimilar area in the image, and improves the image quality after deleting the redundant data of the image.
According to the invention, the similar area and the dissimilar area are convolved respectively according to different convolution layers to obtain corresponding compressed images, so that the effect of deleting the image repeated data is achieved; the data reserved after the image data is deleted still has a relatively complete original image structure and image information, and the image quality loss is ensured to be as small as possible when the reserved data is subjected to data recovery.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment of an artificial intelligence based image deduplication method according to the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve its intended aim, the specific implementation, structure, characteristics and effects of the invention are described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the image repeated data deleting method based on artificial intelligence provided by the invention in detail with reference to the accompanying drawings; the invention compresses any image data that needs to be stored, so that it occupies less memory when stored.
Specifically, the invention takes face image data as an example to explain the scheme:
referring to fig. 1, a flowchart of steps of an image deduplication method based on artificial intelligence according to an embodiment of the present invention is shown, the method includes the steps of:
step 1, obtaining image data, and sequentially carrying out graying treatment and equal block dividing operation on the image data to obtain a plurality of gray image blocks.
Specifically, in this embodiment, the camera is used to obtain image data of the face image, and the size of the image data is 240×240.
Specifically, in this embodiment, the image is converted to gray scale by a weighted graying method, where the gray weights of the R, G and B channels are 0.3, 0.6 and 0.1 respectively. The grayscale image obtained by the graying processing is then divided into blocks of size 8×8, giving 900 gray image blocks in total. To further improve the image quality after the repeated data deletion, this embodiment also supports dividing the image into larger gray image blocks, such as 16×16. The image is divided into blocks because the pixel points are mutually independent, so this processing is beneficial to the parallel operation of the neural network, improves the processing efficiency of the system, and reduces the time complexity.
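The following is a minimal sketch, in Python with NumPy, of the graying and equal-blocking step described above. The function name, the NumPy array layout and the example usage are illustrative assumptions; the channel weights 0.3/0.6/0.1 and the 8×8 block size are taken from this embodiment.

```python
import numpy as np

def to_gray_blocks(rgb: np.ndarray, block: int = 8) -> np.ndarray:
    """Weighted graying (R, G, B weights 0.3, 0.6, 0.1) followed by equal-size blocking."""
    gray = 0.3 * rgb[..., 0] + 0.6 * rgb[..., 1] + 0.1 * rgb[..., 2]
    h, w = gray.shape
    # Rearrange the grayscale image into (num_blocks, block, block);
    # for a 240x240 image and 8x8 blocks this yields 900 gray image blocks.
    return (gray.reshape(h // block, block, w // block, block)
                .transpose(0, 2, 1, 3)
                .reshape(-1, block, block))

# Example usage (assuming face_rgb is a 240x240x3 uint8 array):
# blocks = to_gray_blocks(face_rgb)   # blocks.shape == (900, 8, 8)
```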
And 2, performing image processing on the plurality of gray image blocks to obtain the marking value of each pixel point in each gray image block.
In this embodiment, the specific steps of the image processing are:
1) Selecting any gray image block, and taking the difference between the gray value of the first pixel point in the first row of the selected gray image block and those of the second, third and fourth pixel points following it in turn, obtaining the corresponding gray difference values d0, d1 and d2;
wherein the first pixel point of the first row in the above step generally starts from the left side of the gray image block.
2) Calculating the mean of the gray difference values d0, d1 and d2, and comparing this difference mean with a mean threshold; if the difference mean is smaller than or equal to the mean threshold, the marking values of all pixel points involved in the calculation are set to 0; if the difference mean is greater than the mean threshold, the marking value of the first pixel point is computed from the difference mean and the mean threshold M1;
wherein the mean threshold in the above step is M1 = ±10.
By analogy with steps 1) and 2), the marking value of the j-th pixel point is obtained, and thus the marking value of each pixel point of each gray image block. It should be noted that when the marking values of the first pixel point and of the second, third and fourth pixel points after it are 0, and the gray difference mean calculated between the second pixel point and the third, fourth and fifth pixel points after it is greater than the mean threshold, a marking value computed from that difference mean is assigned to the fifth pixel point while the marking values of the remaining pixel points are unchanged; the fifth pixel point is then taken as a gray jump point, and when the gray difference values between the sixth pixel point and the seventh and eighth pixel points after it are calculated, the gray jump point is used as the first pixel point of the calculation; when the gray difference value of the seventh pixel point is calculated, the sixth pixel point is used as the first pixel point, and so on, until a pixel-point region whose gray difference value is smaller than or equal to the difference threshold is obtained (a sketch of this procedure is given below).
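The sketch below is a hedged, simplified version of the per-row marking procedure. The exact marking formula for a gray jump point is not recoverable from this text, so the default mark_fn (difference mean divided by the mean threshold) and the window stride are illustrative assumptions only; the threshold M1 = 10 and the rule that all involved pixel points are marked 0 when the difference mean does not exceed the threshold follow this embodiment.

```python
import numpy as np

M1 = 10.0  # mean threshold used in this embodiment

def mark_row(row: np.ndarray, mark_fn=lambda d_bar: d_bar / M1) -> np.ndarray:
    """Assign a marking value to each pixel point of one row of a gray image block."""
    row = row.astype(float)
    marks = np.zeros_like(row)
    i = 0
    while i + 3 < len(row):
        d_bar = float(np.mean(np.abs(row[i] - row[i + 1:i + 4])))  # difference mean
        if d_bar <= M1:
            marks[i:i + 4] = 0.0       # all pixel points involved are marked 0
            i += 4                     # assumed stride: move past the marked group
        else:
            marks[i] = mark_fn(d_bar)  # gray jump point: marking value from d_bar and M1
            i += 1                     # the next pixel becomes the new first pixel
    return marks
```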
Step 3, judging, according to the marking values of each gray image block, whether adjacent gray image blocks satisfy the same marking-value distribution; when they do, judging the adjacent gray image blocks to be similar and stitching them into a new gray image block, and continuing the judgment in sequence until all similar gray image blocks have been stitched into similar region blocks; otherwise, judging the two adjacent gray image blocks to be dissimilar and not stitching them, finally obtaining dissimilar region blocks.
Specifically, the method for judging whether adjacent gray image blocks satisfy the same marking-value distribution is as follows:
The marking values of two adjacent gray image blocks are mapped into three-dimensional space to obtain the corresponding three-dimensional distribution planes; the two planes are overlapped, the Euclidean distances between corresponding pixel points of the two gray image blocks are calculated, and the coupling degree is obtained as the mean of these Euclidean distances; when the coupling degree is smaller than a set coupling threshold, the two gray image blocks satisfy the same marking-value distribution;
the size of the gray image block is the size of the base of the three-dimensional space, and the marking value is the three-dimensional height of the pixel point;
the closer the coupling degree is to 0, the more the two gray image blocks are considered to satisfy the same marking-value distribution.
Specifically, the mapped values are connected in three-dimensional space to obtain the three-dimensional spatial distribution plane.
If the three-dimensional spatial distribution planes of the two gray image blocks are not equal in size, the larger gray image block needs to be scaled to obtain a plane equal in size to the other; the scaling method deletes similar points at pixel-point intervals until the two planes are of equal size, after which the coupling degree of the two three-dimensional distribution planes is judged.
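A minimal sketch of the coupling-degree test follows, assuming two mark-value arrays of equal size. Because the x-y positions of corresponding pixel points coincide once the two three-dimensional distribution planes are overlapped, the Euclidean distance between corresponding points reduces to the absolute difference of their marking-value heights; the coupling threshold value below is illustrative, as the text does not specify one.

```python
import numpy as np

def coupling_degree(marks_a: np.ndarray, marks_b: np.ndarray) -> float:
    """Mean Euclidean distance between the overlapped mark-value surfaces of two blocks."""
    return float(np.mean(np.abs(marks_a - marks_b)))

def same_mark_distribution(marks_a: np.ndarray, marks_b: np.ndarray,
                           coupling_threshold: float = 0.5) -> bool:
    # The closer the coupling degree is to 0, the more the two blocks are
    # considered to satisfy the same marking-value distribution.
    return coupling_degree(marks_a, marks_b) < coupling_threshold
```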
Further, in order to judge similar image blocks more accurately, the method also includes, after the same marking-value distribution condition is satisfied, calculating the gray difference values of the edge pixel points of adjacent gray image blocks; when the gray difference value is smaller than a set threshold, image stitching is performed to obtain a similar region block; when the gray difference value is larger than the set threshold, image stitching is not performed, finally obtaining a dissimilar region block.
Step 4, calculating the degree of similarity between the pixel points in each similar region block according to the marking values of its pixel points, and taking the number of pixel points in the similar region block as the size of the similar region block; calculating the degree of similarity between the pixel points in each dissimilar region block according to the marking values of its pixel points, and taking the number of pixel points in the dissimilar region block as the size of the dissimilar region block; and determining the convolution kernel size and number of convolutions of each region block according to the size and degree of similarity of the similar region blocks and of the dissimilar region blocks, respectively.
Specifically, the degree of similarity of a similar region block in this embodiment is calculated from the marking values of its pixel points,
wherein i is any pixel point in the similar region block, Ki is the marking value of pixel point i, and n is the total number of pixel points in the similar region block.
In this embodiment, the calculation of the similarity degree of the dissimilar region blocks is the same as the calculation of the similar region blocks described above, and will not be repeated here.
The convolution kernel size in this embodiment is determined by L, the ratio of the side length of the similar region block or dissimilar region block to the side length of the smallest region block, rounded down so that the kernel size is an integer. The smallest region block in this embodiment is an 8×8 block, and the convolution kernel size corresponding to this default smallest block is 3×3; the convolution kernel size of each region block is then quantized according to the ratio L of its side length to 8.
Specifically, in this embodiment the corresponding number of convolutions is calculated from the degree of similarity of each region block. To avoid excessive convolution of the network, which would increase the difficulty of subsequent image data recovery, the maximum number of convolutions is set to 20 in this embodiment, and the number of convolutions of a region block is determined by its degree of similarity, rounded up; the purpose of rounding up is to make the number of convolutions the smallest possible integer, so that the gray image is convolved as little as possible and more of the original image data characteristics are retained.
In this embodiment, since the similar region blocks and dissimilar region blocks obtained differ in image size, different convolution kernels need to be set for the different image sizes. It should be noted that the larger the convolution kernel, the coarser the extracted image features, and the more original image information is retained in the obtained feature map.
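The sketch below mirrors the kernel-size and convolution-count rules described above. The exact formulas are not recoverable from this text; scaling the 3×3 base kernel by the rounded-down ratio L, and scaling the 20-convolution cap by the degree of similarity, are assumptions that only reproduce the qualitative behaviour (larger blocks get larger kernels, counts are rounded up to the smallest integer and capped at 20).

```python
import math

BASE_KERNEL = 3   # kernel size paired with the smallest (8x8) region block
MAX_CONV = 20     # maximum number of convolutions set in this embodiment

def kernel_size(side_len: int, min_side: int = 8) -> int:
    L = side_len / min_side                 # ratio of side lengths
    return BASE_KERNEL * math.floor(L)      # assumed quantization of the 3x3 base kernel

def conv_times(similarity: float) -> int:
    # Assumed mapping: more similar region blocks are convolved more often,
    # rounded up and capped at MAX_CONV.
    return min(MAX_CONV, max(1, math.ceil(MAX_CONV * similarity)))
```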
And step 5, respectively carrying out convolution processing on the corresponding similar region blocks and dissimilar region blocks according to the convolution kernel sizes and numbers of convolutions of the different region blocks, to obtain the corresponding image feature maps.
In this embodiment, as a region block is convolved, the parameters of the obtained feature map become fewer and fewer, so the corresponding feature map becomes smaller and smaller. To ensure that the image size remains unchanged after the redundant data is deleted, the part missing from the region-block convolution is padded with zeros, so that the image feature map always keeps the original size before convolution. In this embodiment, padding the missing part of the image with zeros is in effect equivalent to deleting the redundant data of the pixels in the similar regions of the image, thereby achieving the deletion of redundant image data, that is, the purpose of compressing the image data.
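A minimal sketch of one zero-padded convolution pass on a region block is given below; the averaging kernel is a placeholder, since the actual convolution kernels belong to the network and are not specified here, and scipy is assumed to be available. The constant (zero) padding keeps the feature map at the original block size, as described above.

```python
import numpy as np
from scipy.ndimage import convolve

def convolve_block(block: np.ndarray, k: int, times: int) -> np.ndarray:
    """Apply `times` zero-padded convolutions with a k x k kernel to one region block."""
    kernel = np.full((k, k), 1.0 / (k * k))   # placeholder averaging kernel
    feat = block.astype(float)
    for _ in range(times):
        # mode="constant", cval=0.0 pads the missing border with zeros,
        # so the feature map keeps the size of the original region block.
        feat = convolve(feat, kernel, mode="constant", cval=0.0)
    return feat
```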
And step 6, merging the obtained image feature maps to obtain the compressed image data.
Specifically, the method for merging the image feature maps is as follows: according to the adjacency of the gray image blocks, the part of each image feature map that is missing after the convolution operation is padded with zeros, and each feature map is then stitched directly with its adjacent image feature maps to obtain the compressed image data.
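A minimal sketch of the merging step, assuming the per-block feature maps are kept in a two-dimensional grid that mirrors the adjacency of the original gray image blocks and that, thanks to the zero padding, maps in the same row share a common height:

```python
import numpy as np

def merge_feature_maps(grid_of_feats) -> np.ndarray:
    """grid_of_feats: list of rows, each row a list of adjacent 2-D feature maps."""
    rows = [np.hstack(row) for row in grid_of_feats]  # stitch left-to-right neighbours
    return np.vstack(rows)                            # stack the stitched rows vertically
```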
Further, the invention also comprises a step of recovering the compressed image data.
Specifically, since the image convolution process itself is irreversible, the image feature maps are restored in this embodiment by using a self-coding network. The self-coding network comprises a convolution network part and a self-coding part, and the overall structure of the network is Encoder-Decoder, wherein the Encoder is the network convolution part and the Decoder is the self-coding image-data recovery part.
It should be noted that the training of the self-coding network and the network convolution process in this embodiment are carried out synchronously, ensuring that the self-coding network parameters and the network convolution parameters can be learned synchronously and that the reconstructed image retains as much image quality as possible. This part is not the focus of the invention and is not described in detail; it can be understood as completing the decoder training of the self-coding network using an existing self-coding network training method.
The invention adopts the self-coding network to compensate for the irreversibility of convolution; the self-coding network can reduce the image quality loss as much as possible while completing the recovery of the image data.
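A minimal PyTorch sketch of the Encoder-Decoder structure described above is given below; the layer widths, kernel sizes and single-channel input are illustrative assumptions, with the encoder standing in for the network convolution part and the decoder for the self-coding image-data recovery part.

```python
import torch
import torch.nn as nn

class RecoveryAutoencoder(nn.Module):
    """Encoder-Decoder network: the encoder compresses, the decoder recovers the image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                      # network convolution part
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                      # image-data recovery part
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Training, synchronized with the convolution step as described, would minimize a
# reconstruction loss such as nn.MSELoss()(model(x), x).
```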
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (4)

1. An image data repeating deleting method based on artificial intelligence, which is characterized by comprising the following steps:
acquiring image data, and sequentially carrying out graying treatment and equal block dividing operation on the image data to obtain a plurality of gray image blocks;
performing image processing on the gray image blocks to obtain a marking value of each pixel point in each gray image block;
judging, according to the marking values of each gray image block, whether adjacent gray image blocks satisfy the same marking-value distribution; when they do, judging the adjacent gray image blocks to be similar and stitching them into a new gray image block, and continuing the judgment in sequence until all similar gray image blocks have been stitched into similar region blocks; otherwise, judging the two adjacent gray image blocks to be dissimilar and not stitching them, finally obtaining dissimilar region blocks;
calculating the degree of similarity between the pixel points in each similar region block according to the marking values of its pixel points, and taking the number of pixel points in the similar region block as the size of the similar region block; calculating the degree of similarity between the pixel points in each dissimilar region block according to the marking values of its pixel points, and taking the number of pixel points in the dissimilar region block as the size of the dissimilar region block; determining the convolution kernel size and number of convolutions of each region block according to the size and degree of similarity of the similar region blocks and of the dissimilar region blocks, respectively;
according to the convolution kernel sizes and convolution times of different area blocks, respectively carrying out convolution processing on corresponding similar area blocks and dissimilar area blocks to obtain corresponding image feature images;
combining the obtained image feature images to obtain compressed image data;
the specific steps of the image processing are as follows:
1) Selecting any gray image block, and taking the difference between the gray value of the first pixel point in the first row of the block and those of the second, third and fourth pixel points following it in turn, obtaining the corresponding gray difference values;
2) Calculating the mean of the gray difference values and comparing this difference mean with a mean threshold; if the difference mean is smaller than or equal to the mean threshold, the marking values of all pixel points involved in the calculation are set to 0; if the difference mean is greater than the mean threshold, the marking value of the first pixel point is computed from the difference mean and the mean threshold M1;
3) By analogy with steps 1) and 2), the marking value of the j-th pixel point is obtained, and thus the marking value of each pixel point of each gray image block;
the degree of similarity is calculated from the marking values of the pixel points in the region block, wherein i is any pixel point in the similar region block, Ki is the marking value of pixel point i, and n is the total number of pixel points in the similar region block;
the convolution kernel size is determined by L, the ratio of the side length of the similar region block or dissimilar region block to the side length of the smallest region block, rounded down;
the number of convolutions is determined by the degree of similarity of the region block, rounded up.
2. The image repeating data deleting method based on artificial intelligence according to claim 1, wherein the judging method of whether the adjacent gray image blocks meet the same mark value distribution is as follows:
the method comprises the steps of performing three-dimensional space mapping on marking values of two adjacent gray image blocks to obtain corresponding three-dimensional space distribution planes, overlapping the two three-dimensional space distribution planes, calculating Euclidean distances between pixel points in the gray image blocks after overlapping the two three-dimensional space distribution planes, obtaining coupling degree according to the average value of the Euclidean distances between the pixel points of the gray image blocks, and enabling the two gray images to meet the same marking value distribution when the coupling degree is smaller than a set coupling threshold value; the size of the gray image block is the size of the bottom surface of the three-dimensional space, and the marking value is the three-dimensional space height of the pixel point.
3. The image repeated data deleting method based on artificial intelligence according to claim 1, wherein after the same marking-value distribution condition is satisfied, the method further comprises calculating gray difference values of the edge pixel points of adjacent gray image blocks; when the gray difference value is smaller than a set threshold, image stitching is performed to obtain a similar region block; when the gray difference value is larger than the set threshold, image stitching is not performed, finally obtaining a dissimilar region block.
4. The image repeated data deleting method based on artificial intelligence according to claim 1, wherein the method for merging the image feature maps is as follows: according to the adjacency of the gray image blocks, the part of each image feature map that is missing after the convolution operation is padded with zeros, and each feature map is then stitched directly with its adjacent image feature maps to obtain the compressed image data.
CN202111052365.5A 2021-09-08 2021-09-08 Image repeated data deleting method based on artificial intelligence Active CN113721859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111052365.5A CN113721859B (en) 2021-09-08 2021-09-08 Image repeated data deleting method based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111052365.5A CN113721859B (en) 2021-09-08 2021-09-08 Image repeated data deleting method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN113721859A CN113721859A (en) 2021-11-30
CN113721859B true CN113721859B (en) 2023-07-21

Family

ID=78682710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111052365.5A Active CN113721859B (en) 2021-09-08 2021-09-08 Image repeated data deleting method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN113721859B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116719483B (en) * 2023-08-09 2023-10-27 成都泛联智存科技有限公司 Data deduplication method, apparatus, storage device and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102693522A (en) * 2012-04-28 2012-09-26 中国矿业大学 Method for detecting region duplication and forgery of color image
CN109427047A (en) * 2017-08-28 2019-03-05 京东方科技集团股份有限公司 A kind of image processing method and device
CN110517329A (en) * 2019-08-12 2019-11-29 北京邮电大学 A kind of deep learning method for compressing image based on semantic analysis
CN110580704A (en) * 2019-07-24 2019-12-17 中国科学院计算技术研究所 ET cell image automatic segmentation method and system based on convolutional neural network
CN110766095A (en) * 2019-11-01 2020-02-07 易思维(杭州)科技有限公司 Defect detection method based on image gray level features
CN111125416A (en) * 2019-12-27 2020-05-08 郑州轻工业大学 Image retrieval method based on multi-feature fusion


Also Published As

Publication number Publication date
CN113721859A (en) 2021-11-30

Similar Documents

Publication Publication Date Title
US10547852B2 (en) Shape-adaptive model-based codec for lossy and lossless compression of images
Guarda et al. Point cloud coding: Adopting a deep learning-based approach
KR0154739B1 (en) Fractal image compression device and method
CN114693816B (en) Intelligent image big data storage method
CN110933438B (en) JPEG image reversible information hiding method
CN115578476B (en) Efficient storage method for urban and rural planning data
CN113721859B (en) Image repeated data deleting method based on artificial intelligence
CN114531599B (en) Image compression method for medical image storage
CN107371026B (en) video data multiple compression and reconstruction method
CN113256657B (en) Efficient medical image segmentation method and system, terminal and medium
CN113947538A (en) Multi-scale efficient convolution self-attention single image rain removing method
CN115102934B (en) Decoding method, encoding device, decoding equipment and storage medium for point cloud data
US10477225B2 (en) Method of adaptive structure-driven compression for image transmission over ultra-low bandwidth data links
Ruivo et al. Double-deep learning-based point cloud geometry coding with adaptive super-resolution
KR20140099986A (en) Colorization-based color image coding method by meanshift segmentation algorithm
WO2023050381A1 (en) Image and video coding using multi-sensor collaboration
WO2023178662A1 (en) Image and video coding using multi-sensor collaboration and frequency adaptive processing
CN116156205A (en) Optimized storage transmission method based on image classification data
WO2024007144A1 (en) Encoding method, decoding method, code stream, encoders, decoders and storage medium
Gupta et al. Comparative study for Image Forgery Analysis between JPEG and Double JPEG Compression
CN116152363A (en) Point cloud compression method and point cloud compression device based on depth map
Lin et al. Sparse Tensor-based point cloud attribute compression using Augmented Normalizing Flows
CN113780340A (en) Compressed image identification method based on deep learning
CN117391986A (en) Point cloud geometrical attribute compression post-processing optimization device and method based on neural network
CN117409233A (en) Pathological image classification method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20230629

Address after: No. 173, 17th Floor, No. 105 Zijingshan South Road, Guancheng District, Zhengzhou City, Henan Province, 450002

Applicant after: Eurasia hi tech digital technology Co.,Ltd.

Address before: No.136, science Avenue, high tech Zone, Zhengzhou City, Henan Province, 450000

Applicant before: Zhengzhou University of light industry

GR01 Patent grant
GR01 Patent grant