CN113724157A - Image blocking method, image processing method, electronic device, and storage medium - Google Patents

Image blocking method, image processing method, electronic device, and storage medium

Info

Publication number
CN113724157A
Authority
CN
China
Prior art keywords
target image
preset
image
image blocks
pixel value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110921527.8A
Other languages
Chinese (zh)
Inventor
梁涛 (Liang Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110921527.8A priority Critical patent/CN113724157A/en
Publication of CN113724157A publication Critical patent/CN113724157A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20048 Transform domain processing
    • G06T2207/20052 Discrete cosine transform [DCT]
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the technical field of video and discloses an image blocking method, an image processing method, an electronic device, and a storage medium. The image blocking method comprises the following steps: acquiring a target image; and sliding a window of a preset size over the target image according to a preset step length so as to divide the target image into a plurality of target image blocks, wherein the preset step length is smaller than the height and/or width of the preset size. This approach helps reduce the blocking effect of the reconstructed image.

Description

Image blocking method, image processing method, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image blocking method, an image processing method, an electronic device, and a storage medium.
Background
In the prior art, limitations of software and hardware mean that an image with too many pixels or too large a size cannot be processed in one pass. The image therefore needs to be partitioned before processing; image processing is then performed on each image block obtained by partitioning, and finally the processed image blocks are reconstructed into a complete image to obtain the processed image.
The drawback of existing blocking methods is that, after the resulting image blocks are reconstructed into an image, the boundaries of the image blocks show obvious discontinuities in the reconstructed image; that is, the blocking effect of the reconstructed image is severe.
Disclosure of Invention
The technical problem mainly solved by the application is how to reduce the blocking effect of the reconstructed image.
In order to solve the above technical problem, the first technical solution adopted by the present application is: an image blocking method, comprising: acquiring a target image; and sliding a window of a preset size over the target image according to a preset step length so as to divide the target image into a plurality of target image blocks, wherein the preset step length is smaller than the height and/or width of the preset size.
In order to solve the above technical problem, the second technical solution adopted by the present application is: an image processing method comprising: acquiring a plurality of target image blocks by using the image blocking method; carrying out noise reduction processing on each target image block; acquiring an overlapping area and a non-overlapping area of each target image block; calculating a first weighted pixel value of the overlapping area according to a first preset weight value; calculating a second weighted pixel value of the non-overlapping area according to a second preset weight value; the first preset weight is determined by the number of target image blocks corresponding to the overlapping area; and splicing all the weighted target image blocks according to the target image to obtain a reconstructed image.
In order to solve the above technical problem, a third technical solution adopted by the present application is: an electronic device, comprising: a memory and a processor; the memory is used for storing program instructions, and the processor is used for executing the program instructions to realize the image blocking method or the image processing method.
In order to solve the above technical problem, a fourth technical solution adopted by the present application is: a computer-readable storage medium storing program instructions which, when executed by a processor, implement the above-described image blocking method or the above-described image processing method.
Different from the prior art, the target image is divided into a plurality of target image blocks by sliding a window of the preset size over the target image with a preset step length that is smaller than the height and/or width of the preset size, so that an image subsequently reconstructed from the plurality of target image blocks is smoother, and the blocking effect of the reconstructed image is reduced.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an embodiment of the image blocking method of the present application;
FIG. 2 is a schematic diagram of blocking based on the image blocking method of the present application;
FIG. 3 is a flowchart illustrating an embodiment of an image processing method according to the present application;
FIG. 4 is a flowchart illustrating an embodiment of step S22 in FIG. 3;
FIG. 5 is a schematic structural diagram of an embodiment of a digital circuit in which the formula of the one-dimensional DCT transform is decomposed into 4 matrix vectors;
FIG. 6 is a schematic structural diagram of an embodiment of a digital circuit in which the formula of the one-dimensional inverse DCT transform is decomposed into 4 matrix vectors;
FIG. 7 is a schematic diagram of a frequency domain distribution of an embodiment of a frequency domain image block in a two-dimensional Gaussian curve distribution according to the present application;
FIG. 8 is a schematic diagram of the region labeling of FIG. 2;
FIG. 9 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 10 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first" and "second" in this application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart illustrating an embodiment of an image blocking method according to the present application. Fig. 2 is a schematic diagram of blocking based on the image blocking method of the present application.
As shown in fig. 1, in a first embodiment, the present application provides an image blocking method, including:
step S11: and acquiring a target image.
The target image may be obtained from a target video stream, so that image blocking is subsequently performed on each image in the target video stream; alternatively, a single target image to be processed may be obtained, so that it is blocked and then subjected to the subsequent image processing. The source of the target image is not limited here.
Step S12: and sliding the target image on a preset size window according to a preset step length to divide the target image into a plurality of target image blocks.
Wherein the preset step length is smaller than the height and/or width of the preset size.
Optionally, the preset step length includes a preset horizontal step length and a preset vertical step length.
Step S12 may specifically include: sliding a window of a preset size on the target image in the horizontal direction according to a preset horizontal step length and/or in the vertical direction according to a preset vertical step length, so as to divide the target image into a plurality of target image blocks, wherein the preset horizontal step length is smaller than the width of the preset size, and/or the preset vertical step length is smaller than the height of the preset size.
For example, as shown in fig. 2, when the preset size is 8 × 8 and both the preset horizontal step length and the preset vertical step length are 6, a 26 × 26 target image may be divided into 20 image blocks of size 8 × 8. Taking the image block at the top left corner as an example, part A of the block does not overlap any other image block, part B overlaps the image block to its right, part C overlaps the image block below it, and part D overlaps the image block to its right, the image block below it, and the image block to its lower right.
The image blocking may not divide the target image evenly; for example, when the target image is 24 × 26, the bottom row can only be divided into 6 × 8 image blocks. To ensure that all image blocks obtained by the blocking are 8 × 8, the target image needs to be padded. In the case of a 24 × 26 target image, two rows of pixels can be added, for example by mirroring about the lower boundary of the original target image, or the target image can be padded to 26 × 26 in some other manner, which is not limited here.
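As a concrete illustration of the sliding-window blocking and mirror padding described above, the following Python/NumPy sketch shows one possible software implementation; the function name, parameters and the use of NumPy's reflect padding are illustrative assumptions, not part of the patented FPGA implementation.

```python
import numpy as np

def block_image(img, block=8, step=6):
    """Split a 2-D image into overlapping block x block tiles with a sliding
    window of the given step; mirror-pad the bottom/right edges so that every
    tile has the full preset size (a sketch under assumed conventions)."""
    h, w = img.shape
    # Number of window positions needed to cover each dimension.
    n_rows = int(np.ceil(max(h - block, 0) / step)) + 1
    n_cols = int(np.ceil(max(w - block, 0) / step)) + 1
    # Pad with mirrored pixels so the last window lies fully inside the image.
    pad_h = (n_rows - 1) * step + block - h
    pad_w = (n_cols - 1) * step + block - w
    padded = np.pad(img, ((0, pad_h), (0, pad_w)), mode="reflect")
    tiles, origins = [], []
    for r in range(n_rows):
        for c in range(n_cols):
            y, x = r * step, c * step
            tiles.append(padded[y:y + block, x:x + block])
            origins.append((y, x))
    return tiles, origins, padded.shape

# Example: a 24 x 26 image is mirror-padded by two rows to 26 x 26 before blocking.
tiles, origins, padded_shape = block_image(np.zeros((24, 26)), block=8, step=6)
```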
Specifically, the image blocking method may be implemented on a Field Programmable Gate Array (FPGA), specifically as follows:
On the premise that the preset size is 8 × 8 and both the preset horizontal step length and the preset vertical step length are 6, a block RAM (Random Access Memory) with a capacity of 14 rows of data is used to cache the target image, and the 8 × 1 columns of the target image are cached in sequence. With this arrangement, while the pixels of one 8 × 1 column of an image block are being cached, one column of pixels of a previously cached image block can be output for the related calculation. After one column of the target image has been cached, the next column of the target image can be cached in the same manner, until every column of the target image has been cached.
Different from the prior art, the target image is divided into a plurality of target image blocks by sliding a window of the preset size over the target image with a preset step length that is smaller than the height and/or width of the preset size, so that an image subsequently reconstructed from the plurality of target image blocks is smoother, and the blocking effect of the reconstructed image is reduced.
Fig. 3 is a flowchart illustrating an embodiment of an image processing method according to the present application.
As shown in fig. 3, in a second embodiment, the present application further proposes an image processing method, including:
step S21: and acquiring a plurality of target image blocks by using an image blocking method.
The image blocking method described in the first embodiment is used to perform image blocking processing on the target image to obtain a plurality of target image blocks.
Step S22: and carrying out noise reduction processing on each target image block.
The target image blocks are subjected to noise reduction processing, and the target image may also be subjected to edge detail enhancement and other processing, which is not limited here.
Fig. 4 is a flowchart illustrating an embodiment of step S22 in fig. 3.
Optionally, as shown in fig. 4, step S22 may specifically include:
step S221: and performing frequency domain conversion on all the target image blocks to obtain each frequency domain image block.
Step S222: and performing frequency domain noise reduction processing on each frequency domain image block.
Step S223: and performing inverse frequency domain conversion on all frequency domain image blocks subjected to the noise reduction processing to obtain each target image block subjected to the noise reduction processing.
When a target image block is denoised, it can first be converted into a frequency domain image block; the high-frequency part of the frequency domain image block is then processed directly, and after the processing is finished the frequency domain image block is converted back from the frequency domain to obtain the noise-reduced target image block, thereby achieving the noise reduction effect.
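As a compact software analogue of this convert, process, invert flow (a sketch assuming SciPy's orthonormal 2-D DCT and a simple attenuation of a high-frequency quadrant, which is not the patent's Gaussian-coefficient scheme described below):

```python
import numpy as np
from scipy.fft import dctn, idctn

def denoise_block(block, attenuation=0.5, cutoff=4):
    """Frequency-domain noise reduction of one 8 x 8 target image block:
    convert to the frequency domain, suppress the high-frequency part,
    then convert back (attenuation and cutoff are assumed parameters)."""
    freq = dctn(block, norm="ortho")        # frequency domain conversion
    freq[cutoff:, cutoff:] *= attenuation   # directly process the high-frequency part
    return idctn(freq, norm="ortho")        # inverse frequency domain conversion
```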
Further, step S221 may specifically include: performing two times of one-dimensional DCT (Discrete Cosine Transform) transformation on all target image blocks to obtain each frequency domain image block;
step S223 may specifically include:
and performing one-dimensional DCT inverse transformation twice on all frequency domain image blocks subjected to the noise reduction processing to obtain each target image block subjected to the noise reduction processing.
The step of performing two times of one-dimensional DCT transformation on a target image block may specifically include: firstly, one-dimensional DCT transformation is carried out on a target image block to obtain a one-dimensional DCT image matrix, then row-column transposition is carried out on the one-dimensional DCT image matrix, and finally, one-dimensional DCT transformation is carried out on the one-dimensional DCT image matrix after the transposition to obtain a frequency domain image block.
The step of performing two times of one-dimensional DCT inverse transformations on a frequency domain image block may specifically include: firstly, performing one-dimensional DCT inverse transformation on the frequency domain image block to obtain a one-dimensional DCT image matrix, then performing row-column transposition on the one-dimensional DCT image matrix, and finally performing one-dimensional DCT inverse transformation on the transposed one-dimensional DCT image matrix to obtain the target image block.
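The following sketch is a software analogue of this two-pass scheme. It is not the FPGA circuit of FIGS. 5 and 6, and it assumes the standard orthonormal 8-point DCT-II matrix; the exact scaling of the patent's formula (1) is shown only as an image and may differ.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal n-point DCT-II matrix (an assumed scaling)."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    m = np.arange(n).reshape(1, -1)   # sample index
    c = np.sqrt(2.0 / n) * np.cos((2 * m + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)        # DC row uses the smaller scale factor
    return c

C = dct_matrix(8)

def dct2_two_pass(block):
    """2-D DCT of an 8 x 8 block as two 1-D passes with a transpose in between."""
    pass1 = C @ block        # first 1-D DCT, applied to every column
    return C @ pass1.T       # transpose, then 1-D DCT of every column again

def idct2_two_pass(freq):
    """Inverse: two 1-D inverse DCT passes with a transpose in between."""
    pass1 = C.T @ freq       # first 1-D inverse DCT of every column
    return C.T @ pass1.T     # transpose, then inverse DCT of every column again

x = np.random.rand(8, 8)
assert np.allclose(idct2_two_pass(dct2_two_pass(x)), x)  # round trip recovers the block
```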
Specifically, fig. 5 is a schematic structural diagram of an embodiment of a digital circuit in which the formula of the one-dimensional DCT transform is decomposed into 4 matrix vectors, and FIG. 6 is a schematic structural diagram of an embodiment of a digital circuit in which the formula of the one-dimensional inverse DCT transform is decomposed into 4 matrix vectors.
Two times of one-dimensional DCT transformation and two times of one-dimensional DCT inverse transformation can be realized by adopting an FPGA mode, and the method specifically comprises the following steps:
two times of one-dimensional DCT transformation:
Firstly, an 8 × 1 column pixel matrix of a cached target image block can be read from the RAM, and the one-dimensional DCT transformation is carried out based on the following formula:
[Formulas (1) and (2), shown as images in the original document: the one-dimensional DCT transform that maps the 8 × 1 input column x0 to x7 to the output column z0 to z7.]
In formula (1), z0 to z7 are the elements of one column of the one-dimensional DCT matrix obtained after the one-dimensional DCT transformation, and x0 to x7 are the elements of an 8 × 1 column pixel matrix of a cached target image block read from the RAM.
x0 to x7 can be converted from floating-point numbers to fixed-point numbers for ease of calculation, and formula (1) can be decomposed into 2 or 4 matrix vectors as follows:
decomposed into 2 matrix vectors:
[Formulas (3) and (4), shown as images in the original document: the decomposition of formula (1) into 2 matrix vectors.]
decomposed into 4 matrix vectors:
[Formulas (5) to (8), shown as images in the original document: the decomposition of formula (1) into 4 matrix vectors.]
Taking the decomposition into 4 matrix vectors as an example, as shown in fig. 5, a digital circuit for calculating the above formulas (5) to (8) can be constructed with 8 multipliers and 14 adders, which facilitates building the corresponding circuit on the FPGA. If the decomposition into 2 matrix vectors is used instead, a digital circuit for calculating the above formulas (3) and (4) needs to be constructed with 11 multipliers and 23 adders, and the calculation time is longer.
After one DCT transformation obtains z0 to z7 from the digital circuit shown in FIG. 5, each zi can be written one by one into a dual-port RAM with a depth of 16 to form a row of data, until the DCT transformation of all columns of the target image block is completed and every zi has been written, so that the row-column transposition is completed during the writing process. The RAM with a depth of 16 can buffer the information of two 8 × 8 image blocks simultaneously, that is, ping-pong processing can be realized: while the information of one 8 × 8 image block is being buffered, the information of another 8 × 8 image block is output for the subsequent second one-dimensional DCT transformation, thereby reducing the calculation time.
The transposed matrix formed by the zi values is then treated as the matrix formed by the xi values, and each of its columns is substituted into formula (1) to obtain a new matrix of zi values, completing the second one-dimensional DCT transformation. At this point two complete one-dimensional DCT transformations have been performed, realizing the two-dimensional DCT transformation of the target image block and converting it into a frequency domain image block that represents its frequency domain information. In this way, a complicated circuit structure for the two-dimensional DCT does not have to be built in the FPGA, which reduces the hardware complexity.
Two times of one-dimensional DCT inverse transformation:
the equation for the inverse one-dimensional DCT transform can be decomposed into 2 or 4 matrix vectors as follows:
decomposed into 2 matrix vectors:
[Formulas (9) and (10), shown as images in the original document: the decomposition of the one-dimensional inverse DCT transform into 2 matrix vectors.]
decomposed into 4 matrix vectors:
[Formulas (11) to (14), shown as images in the original document: the decomposition of the one-dimensional inverse DCT transform into 4 matrix vectors.]
Taking the decomposition into 4 matrix vectors as an example, as shown in fig. 6, a digital circuit for calculating the above formulas (11) to (14) can be constructed with 8 multipliers and 8 adders, which facilitates building the corresponding circuit on the FPGA. If the decomposition into 2 matrix vectors is used instead, a digital circuit for calculating the above formulas (9) and (10) needs to be constructed with 11 multipliers and 17 adders, and the calculation time is longer.
The specific steps of the one-dimensional inverse DCT transformation, the row-column transposition and the second one-dimensional inverse DCT transformation are similar to the steps of the two one-dimensional DCT transformations described above; only the formulas and digital circuits need to be replaced by those used for the one-dimensional inverse DCT transformation, and the details are not repeated here.
Further, step S222 may specifically include:
and segmenting each frequency domain image block according to each Gaussian coefficient in the two-dimensional Gaussian distribution, and adjusting each frequency domain part obtained by segmentation by adjusting each Gaussian coefficient so as to realize frequency domain noise reduction processing.
Specifically, fig. 7 is a schematic diagram of the frequency domain distribution of an embodiment of a frequency domain image block under a two-dimensional Gaussian distribution. As shown in fig. 7, the frequency domain image block 71 may be segmented into different frequency parts based on the Gaussian coefficients; the block 71 contains low-frequency, intermediate-frequency and high-frequency image information distributed from the top left to the bottom right, and the information of the corresponding frequency domain part can be processed as required. For example, the Gaussian coefficients corresponding to the high-frequency image information may be reduced to suppress noise in the target image block, or the edge details in the high-frequency image information may be enhanced to improve the sharpness of the target image block.
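A minimal sketch of this Gaussian-coefficient adjustment might look as follows, assuming an isotropic Gaussian over the DCT coefficient grid with an assumed sigma; the patent does not specify the actual coefficient values or segmentation thresholds.

```python
import numpy as np

def gaussian_frequency_weights(n=8, sigma=3.0):
    """2-D Gaussian coefficients over an n x n DCT grid: near 1 at the
    low-frequency top-left corner, decaying toward the high-frequency
    bottom-right corner (sigma is an assumed tuning parameter)."""
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-(u ** 2 + v ** 2) / (2.0 * sigma ** 2))

def adjust_frequency_block(freq_block, sigma=3.0):
    """Attenuate high-frequency coefficients by the Gaussian weights, which
    suppresses noise; enlarging the weights in the high-frequency region
    would instead enhance edge detail."""
    return freq_block * gaussian_frequency_weights(freq_block.shape[0], sigma)
```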
Step S23: and acquiring an overlapping area and a non-overlapping area of each target image block.
Step S24: and calculating a first weighted pixel value of the overlapping area according to a first preset weight value. And calculating a second weighted pixel value of the non-overlapping area according to a second preset weight value. The first preset weight is determined by the number of the target image blocks corresponding to the overlapping area.
Optionally, the second preset weight and the first preset weight are in a multiple relationship, and the multiple is equal to the number of target image blocks corresponding to the first preset weight.
Specifically, the second preset weight may be several times the first preset weight. For example, if the number of target image blocks sharing the overlapping area is 4, the second preset weight may be 4 times the first preset weight: if the first preset weight is 1, the second preset weight is 4, each pixel value in the overlapping area is multiplied by 1 to obtain a first weighted pixel value, and each pixel value in the non-overlapping area is multiplied by 4 to obtain a second weighted pixel value. When the blocks are then stitched together, the discontinuity of one region appearing too dark and another too light does not occur, which improves the smoothness of the reconstructed image.
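As a small worked example of why this weighting keeps the stitched image uniform (using the weights 1 and 4 from the example above; the pixel values are made up for illustration):

```python
# A pixel covered by 4 overlapping blocks, each contribution weighted by the
# first preset weight (1):
overlap_sum = 1 * 10 + 1 * 12 + 1 * 11 + 1 * 9    # = 42, about 4x the true value
# A pixel covered by a single block, weighted by the second preset weight (4):
non_overlap_sum = 4 * 10                          # = 40, also about 4x the true value
# Dividing the stitched result by the second preset weight (step S25) brings
# both regions back to comparable magnitudes, so no region looks darker or lighter.
print(overlap_sum / 4, non_overlap_sum / 4)       # 10.5 10.0
```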
Optionally, the overlapping area includes a first-type overlapping area and a second-type overlapping area, and the number of target image blocks corresponding to the second-type overlapping area is greater than the number of target image blocks corresponding to the first-type overlapping area; the first preset weight comprises a first-type preset weight and a second-type preset weight, the first-type preset weight is determined by the number of target image blocks corresponding to the first-type overlapping area, and the second-type preset weight is determined by the number of target image blocks corresponding to the second-type overlapping area;
the step S24 of calculating the first weighted pixel value of the overlap region according to the first preset weight value includes:
and calculating a first class weighted pixel value of the first class overlapping region according to the first class preset weight value, and calculating a second class weighted pixel value of the second class overlapping region according to the second class preset weight value.
And obtaining a first weighted pixel value of the overlapping area by using the first type weighted pixel value and the second type weighted pixel value.
Specifically, the overlapping area may be divided into a first-type overlapping area and a second-type overlapping area. For example, a first-type overlapping area may be shared by only 2 target image blocks, while a second-type overlapping area may be shared by 4 target image blocks. According to the numbers of target image blocks corresponding to the first-type and second-type overlapping areas, the first-type preset weight can then be set to 2 times the second-type preset weight and the second preset weight to 4 times the second-type preset weight; for example, if the second-type preset weight is 1, the first-type preset weight is 2 and the second preset weight is 4.
Based on the above manner, corresponding weights can be assigned to the overlapping regions with different numbers of corresponding target image blocks, so as to further improve the smoothness of the reconstructed image.
Optionally, the step S24 of calculating a first weighted pixel value of the overlap region according to a first preset weight value includes:
determining a first preset weight according to the number of target image blocks corresponding to the overlapped area;
acquiring a historical pixel value of a last overlapping area according to a preset direction based on the number of target image blocks corresponding to the overlapping area;
and determining a first weighted pixel value of the overlapping area based on the product of the current pixel value of the overlapping area and the first preset weight value and the historical pixel value.
The preset direction may include the left direction, the upper direction and the upper-left direction. The previously stored pixel value of the corresponding overlapping area of a target image block lying in the preset direction of the currently processed target image block is taken as the corresponding historical pixel value; the product of the current pixel value of the overlapping area and the first preset weight is then summed with each historical pixel value to obtain the first weighted pixel value of the overlapping area.
In an application scenario, fig. 8 is a schematic diagram of the region labeling of fig. 2. As shown in fig. 8, the figure includes four target image blocks, specifically: a first image block comprising the four regions 801, 802, 805 and 806; a second image block comprising the six regions 802, 803, 804, 806, 807 and 808; a third image block comprising the six regions 805, 806, 809, 810, 813 and 814; and a fourth image block comprising the nine regions 806, 807, 808, 810, 811, 812, 814, 815 and 816. Regions 801, 803, 809 and 811 are non-overlapping regions; regions 802, 804, 805, 807, 810, 812, 813 and 815 are first-type overlapping regions; and regions 806, 808, 814 and 816 are second-type overlapping regions. A first-type overlapping region corresponds to 2 target image blocks and a second-type overlapping region corresponds to 4 target image blocks; the second preset weight is 4, the first-type preset weight is 2, and the second-type preset weight is 1. The preset directions include the left direction, the upper direction and the upper-left direction.
Taking the first image block, the second image block, the third image block, and the fourth image block as an example, the specific steps of calculating the first weighted pixel value of each overlapping area of each target image block are described as follows:
for the first image block, no overlapped target image block exists in the preset direction of the first image block. Therefore, the pixel values of the area 802 of the first image block are multiplied by the first type of preset weight to obtain the first weighted pixel value of the area 802 of the first image block. The pixel values of the area 805 of the first image block are multiplied by a first type of preset weight to obtain first weighted pixel values of the area 805 of the first image block. The pixel values of the area 806 of the first image block are multiplied by a second type of preset weight to obtain first weighted pixel values of the area 806 of the first image block.
For the second image block, there is an overlapped target image block in the left direction of the second image block, that is, the second image block overlaps the first image block, so the pixel value of the area 802 of the second image block is multiplied by the first type of preset weight, and then summed with the pixel value of the area 802 of the first image block to obtain the first weighted pixel value of the area 802 of the second image block. The pixel values of the area 806 of the second image block are multiplied with a second type of preset weight and then summed with the pixel values of the area 806 of the first image block to obtain a first weighted pixel value of the area 806 of the second image block. In addition, the areas 804, 807 and 808 are multiplied by the corresponding preset weights respectively to obtain the corresponding first weighted pixel values.
For the third image block, there is an overlapped target image block above the third image block, that is, the third image block overlaps the first image block, so that the pixel value of the area 805 of the third image block is multiplied by the first type of preset weight, and then summed with the pixel value of the area 805 of the first image block to obtain the first weighted pixel value of the area 805 of the third image block. The pixel values of the area 806 of the third image block are multiplied by a second type of preset weight and then summed with the pixel values of the area 806 of the first image block to obtain a first weighted pixel value of the area 806 of the third image block. In addition, the areas 810, 813 and 814 are multiplied by the corresponding preset weights respectively to obtain the corresponding first weighted pixel values.
For the fourth image block, the top-left direction of the fourth image block overlaps with the first image block, the top direction overlaps with the second image block, and the left direction overlaps with the third image block, so the pixel values of the region 806 of the fourth image block are multiplied by the second type of preset weight, and then summed with the pixel values of the regions 806 of the first, second, and third image blocks to obtain the first weighted pixel value of the region 806 of the fourth image block. The pixel values of the area 807 of the fourth image block are multiplied by a first type of preset weight, and then summed with the pixel values of the area 807 of the second image block to obtain a first weighted pixel value of the area 807 of the fourth image block. The pixel values of the area 808 of the fourth image block are multiplied by a second type of preset weight, and then summed with the pixel values of the area 808 of the second image block to obtain a first weighted pixel value of the area 808 of the fourth image block. The pixel values of the area 810 of the fourth image block are multiplied by a first type of preset weight, and then summed with the pixel values of the area 810 of the third image block to obtain a first weighted pixel value of the area 810 of the fourth image block. The pixel values of the area 814 of the fourth image block are multiplied by a second type of preset weight and then summed with the pixel values of the area 814 of the third image block to obtain a first weighted pixel value of the area 814 of the fourth image block. In addition, the regions 812, 815 and 816 are multiplied by the corresponding preset weights respectively to obtain the corresponding first weighted pixel values.
Based on the above manner, the first weighted pixel value corresponding to each overlapping area of each target image block may be calculated, and the preset direction may also include any other direction, which may be determined according to the requirement, and is not limited herein.
Specifically, the weighting processing in steps S23 and S24 may be implemented on an FPGA. The block RAM that caches 14 rows of data may be used to cache each target image block; the previously stored pixel values at the corresponding overlapping-area position of the corresponding image block, that is, the corresponding historical pixel values, are first read from the block RAM and are then summed by a digital circuit with the pixel values at the corresponding overlapping-area position currently being weighted, so as to obtain the corresponding first weighted pixel value, which is stored back into the block RAM. In this way, the first weighted pixel values of the 8 × 8 target image blocks can be calculated and stored column by column, and the block RAM with 14 rows of cached data allows the first weighted pixel values of one column of one target image block to be cached while the first weighted pixel values of one column of another cached target image block are output, improving efficiency and speeding up the image processing.
Step S25: and splicing all the weighted target image blocks according to the target image to obtain a reconstructed image.
Optionally, step S25 may specifically include:
and splicing all the weighted target image blocks according to the target image, and calculating the spliced target image according to a second preset weight to obtain a reconstructed image.
Specifically, since the target image blocks have been weighted in step S24, the pixel values of the weighted image are larger than the original values. Each target image block may therefore be divided by the second preset weight corresponding to the non-overlapping area, and the target image blocks are then stitched to obtain the reconstructed image, which improves the degree of restoration of the reconstructed image. For example, if the first preset weight is 1 and the second preset weight is 4, each target image block is divided by 4 to obtain image blocks with a higher degree of restoration.
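Putting the weighting and stitching together, the sketch below reconstructs the image from the tiles produced by the earlier block_image sketch. It follows the spirit of steps S23 to S25, assigning each pixel a weight equal to the second preset weight divided by the number of covering blocks (4, 2 or 1 for coverage counts 1, 2 or 4 in the 8 × 8, step-6 layout) and dividing the stitched canvas by the second preset weight at the end; the per-column FPGA bookkeeping and historical-pixel reads are not reproduced, so this is an assumed software equivalent.

```python
import numpy as np

def reconstruct(tiles, origins, padded_shape, out_shape,
                block=8, non_overlap_weight=4):
    """Stitch (already denoised) tiles back into an image by weighted
    accumulation (a sketch of steps S23-S25 under assumed conventions)."""
    acc = np.zeros(padded_shape, dtype=np.float64)
    coverage = np.zeros(padded_shape, dtype=np.float64)
    # Count how many tiles cover each pixel (1 in non-overlapping areas,
    # 2 in first-type overlapping areas, 4 in second-type overlapping areas).
    for y, x in origins:
        coverage[y:y + block, x:x + block] += 1.0
    # Accumulate each tile weighted by (second preset weight / coverage).
    for tile, (y, x) in zip(tiles, origins):
        weight = non_overlap_weight / coverage[y:y + block, x:x + block]
        acc[y:y + block, x:x + block] += tile * weight
    # Step S25: divide the stitched result by the second preset weight and
    # drop the mirror-padded rows/columns.
    acc /= non_overlap_weight
    h, w = out_shape
    return acc[:h, :w]
```

For instance, reconstruct(tiles, origins, padded_shape, (24, 26)) would reverse the block_image example above when the tiles are left unmodified.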
Different from the prior art, the target image is divided into a plurality of target image blocks by sliding a window of the preset size over the target image with a preset step length that is smaller than the height and/or width of the preset size, so that an image subsequently reconstructed from the plurality of target image blocks is smoother, and the blocking effect of the reconstructed image is reduced.
Fig. 9 is a schematic structural diagram of an embodiment of an electronic device according to the present application.
As shown in fig. 9, in a third embodiment, the present application proposes an electronic device, and an electronic device 90 of the present embodiment includes: a processor 91, a memory 92, and a bus 93.
The processor 91 and the memory 92 are respectively connected to the bus 93, the memory 92 stores program instructions, and the processor 91 is configured to execute the program instructions to implement the image blocking method or the image processing method in the above embodiments.
In the present embodiment, the processor 91 may also be referred to as a CPU (Central Processing Unit). The processor 91 may be an integrated circuit chip having signal processing capabilities. The processor 91 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor 91 may be any conventional processor or the like.
Different from the prior art, the target image is divided into a plurality of target image blocks by sliding a window of the preset size over the target image with a preset step length that is smaller than the height and/or width of the preset size, so that an image subsequently reconstructed from the plurality of target image blocks is smoother, and the blocking effect of the reconstructed image is reduced.
FIG. 10 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application.
As shown in fig. 10, the present application proposes a computer-readable storage medium 100, on which program instructions 101 are stored, and the program instructions 101, when executed by a processor (not shown), implement the image blocking method or the image processing method in the above-described embodiments.
The computer-readable storage medium 100 of the embodiment may be, but is not limited to, a usb disk, an SD card, a PD optical drive, a removable hard disk, a high-capacity floppy drive, a flash memory, a multimedia memory card, a server, a storage unit in an FPGA or an ASIC, and the like.
Different from the prior art, the target image is divided into a plurality of target image blocks by sliding a window of the preset size over the target image with a preset step length that is smaller than the height and/or width of the preset size, so that an image subsequently reconstructed from the plurality of target image blocks is smoother, and the blocking effect of the reconstructed image is reduced.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (12)

1. An image blocking method, comprising:
acquiring a target image;
sliding on the target image according to a preset step length by using a window with a preset size so as to divide the target image into a plurality of target image blocks;
wherein the preset step size is smaller than the height and/or width of the preset size.
2. The image blocking method according to claim 1, wherein the preset step size includes a preset horizontal step size and a preset vertical step size;
the sliding a window of a preset size on the target image according to a preset step length to divide the target image into a plurality of target image blocks comprises:
sliding a window of a preset size on the target image in a horizontal direction according to the preset horizontal step length and/or in a vertical direction according to the preset vertical step length, so as to divide the target image into a plurality of target image blocks;
wherein the preset horizontal step length is smaller than the width of the preset size, and/or the preset vertical step length is smaller than the height of the preset size.
3. An image processing method, comprising:
acquiring a plurality of target image blocks by using the image blocking method as claimed in claim 1 or 2;
carrying out noise reduction processing on each target image block;
acquiring an overlapping area and a non-overlapping area of each target image block;
calculating a first weighted pixel value of the overlapping area according to a first preset weight value;
calculating a second weighted pixel value of the non-overlapping area according to a second preset weight value;
the first preset weight is determined by the number of target image blocks corresponding to the overlapping area;
and splicing all the weighted target image blocks according to the target image to obtain a reconstructed image.
4. The image processing method according to claim 3, wherein the stitching all weighted target image blocks according to the target image to obtain a reconstructed image comprises:
and splicing all weighted target image blocks according to the target image, and calculating the spliced target image according to a second preset weight to obtain the reconstructed image.
5. The image processing method according to claim 3 or 4, wherein the second preset weight and the first preset weight are in a multiple relationship, and the multiple is equal to the number of the target image blocks corresponding to the first preset weight.
6. The image processing method according to claim 3 or 4, wherein said calculating a first weighted pixel value of the overlap region according to a first preset weight value comprises:
determining the first preset weight according to the number of target image blocks corresponding to the overlapping area;
acquiring a historical pixel value of a last overlapping area according to a preset direction based on the number of target image blocks corresponding to the overlapping area;
and determining a first weighted pixel value of the overlapping area based on the product of the current pixel value of the overlapping area and the first preset weight value and the historical pixel value.
7. The image processing method according to claim 3, wherein said performing noise reduction processing on each of the target image blocks comprises:
performing frequency domain conversion on all the target image blocks to obtain each frequency domain image block;
performing frequency domain noise reduction processing on each frequency domain image block;
and performing inverse frequency domain conversion on all frequency domain image blocks subjected to the noise reduction processing to obtain each target image block subjected to the noise reduction processing.
8. The image processing method according to claim 7, wherein the frequency-domain converting all the target image blocks to obtain each frequency-domain image block comprises:
performing one-dimensional DCT (discrete cosine transformation) twice on all the target image blocks to obtain each frequency domain image block;
the performing inverse frequency domain conversion on all frequency domain image blocks subjected to the noise reduction processing to obtain each target image block subjected to the noise reduction processing comprises:
and performing two times of one-dimensional DCT inverse transformation on all frequency domain image blocks subjected to the noise reduction processing to obtain each target image block subjected to the noise reduction processing.
9. The image processing method according to claim 7 or 8, wherein the performing the frequency-domain noise reduction processing on each of the frequency-domain image blocks comprises:
and segmenting each frequency domain image block according to each Gaussian coefficient in the two-dimensional Gaussian distribution, and adjusting each frequency domain part obtained by segmentation by adjusting each Gaussian coefficient so as to realize frequency domain noise reduction processing.
10. The image processing method according to claim 3 or 4, wherein the overlap region includes a first type of overlap region and a second type of overlap region, and a number value of target image blocks corresponding to the second type of overlap region is greater than a number value of target image blocks corresponding to the first type of overlap region; the first preset weight comprises a first type preset weight and a second type preset weight, the first type preset weight is determined by the number of target image blocks corresponding to the first type overlapping area, and the second type preset weight is determined by the number of target image blocks corresponding to the second type overlapping area;
the calculating a first weighted pixel value of the overlapping region according to a first preset weight value comprises:
Calculating a first class weighted pixel value of the first class overlapping region according to the first class preset weight, and calculating a second class weighted pixel value of the second class overlapping region according to the second class preset weight;
and obtaining a first weighted pixel value of the overlapping area by using the first type of weighted pixel value and the second type of weighted pixel value.
11. An electronic device, comprising: a memory and a processor;
the memory is for storing program instructions for execution by the processor to implement the method of any of claims 1-2 and/or the method of any of claims 3-10.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program instructions which, when executed by a processor, implement the method of any of claims 1-2 and/or the method of any of claims 3-10.
CN202110921527.8A 2021-08-11 2021-08-11 Image blocking method, image processing method, electronic device, and storage medium Pending CN113724157A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110921527.8A CN113724157A (en) 2021-08-11 2021-08-11 Image blocking method, image processing method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110921527.8A CN113724157A (en) 2021-08-11 2021-08-11 Image blocking method, image processing method, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN113724157A true CN113724157A (en) 2021-11-30

Family

ID=78675533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110921527.8A Pending CN113724157A (en) 2021-08-11 2021-08-11 Image blocking method, image processing method, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN113724157A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683045A (en) * 2016-09-28 2017-05-17 深圳市优象计算技术有限公司 Binocular camera-based panoramic image splicing method
CN108510445A (en) * 2018-03-30 2018-09-07 长沙全度影像科技有限公司 A kind of Panorama Mosaic method
CN109493281A (en) * 2018-11-05 2019-03-19 北京旷视科技有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN109493343A (en) * 2018-12-29 2019-03-19 上海鹰瞳医疗科技有限公司 Medical image abnormal area dividing method and equipment
CN110866881A (en) * 2019-11-15 2020-03-06 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN112188214A (en) * 2020-09-22 2021-01-05 展讯通信(上海)有限公司 Image processing method, system, electronic device, and medium
CN112686802A (en) * 2020-12-14 2021-04-20 北京迈格威科技有限公司 Image splicing method, device, equipment and storage medium
CN112883983A (en) * 2021-02-09 2021-06-01 北京迈格威科技有限公司 Feature extraction method and device and electronic system

Similar Documents

Publication Publication Date Title
Acharya et al. Computational foundations of image interpolation algorithms.
KR101137753B1 (en) Methods for fast and memory efficient implementation of transforms
Lamberti et al. CMBFHE: a novel contrast enhancement technique based on cascaded multistep binomial filtering histogram equalization
JP2010003298A (en) Method for filtering image
JPH06245113A (en) Equipment for improving picture still more by removing noise and other artifact
KR20110065997A (en) Image processing apparatus and method of processing image
JP2010003297A (en) Method for filtering of image with bilateral filter and power image
Sajjad et al. Multi-kernel based adaptive interpolation for image super-resolution
Wu et al. Image autoregressive interpolation model using GPU-parallel optimization
US20090074320A1 (en) Image magnification device, image magnification method and computer readable medium storing an image magnification program
Shukla et al. Adaptive fractional masks and super resolution based approach for image enhancement
US20150324953A1 (en) Method and apparatus for performing single-image super-resolution
US8081830B2 (en) Enhancement of digital images
Muhammad et al. Image noise reduction based on block matching in wavelet frame domain
CN113724157A (en) Image blocking method, image processing method, electronic device, and storage medium
Hsin Saliency histogram equalisation and its application to image resizing
CN115797194A (en) Image denoising method, image denoising device, electronic device, storage medium, and program product
Ousguine et al. A new image interpolation using gradient-orientation and cubic spline interpolation
CN114549300A (en) Image dictionary generation method, image reconstruction method and related device
KR100998220B1 (en) Method for adaptive image resizing
CN112465719A (en) Transform domain image denoising method and system
Lee et al. GPU-based real-time super-resolution system for high-quality UHD video up-conversion
Khan et al. Realization of Balanced Contrast Limited Adaptive Histogram Equalization (B-CLAHE) for Adaptive Dynamic Range Compression of real time medical images
US20240135507A1 (en) Upsampling blocks of pixels
Chen et al. Structural similarity-based nonlocal edge-directed image interpolation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination