CN114510990A - Image processing method, image processing apparatus, electronic apparatus, and storage medium - Google Patents

Image processing method, image processing apparatus, electronic apparatus, and storage medium

Info

Publication number
CN114510990A
CN114510990A
Authority
CN
China
Prior art keywords
image
block
processed
preset
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111591523.4A
Other languages
Chinese (zh)
Inventor
粘春湄
方瑞东
林聚财
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202111591523.4A
Publication of CN114510990A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration using non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20052 Discrete cosine transform [DCT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to an image processing method, an apparatus, an electronic apparatus, and a storage medium. The image processing method comprises the following steps: acquiring an image effect block to be processed in an image; determining the similarity between the image effect block to be processed and the image block in the preset search frame, and acquiring a target similar block, wherein the target similar block is the image block with the minimum similarity to the image effect block to be processed in the preset search frame; and under the condition that the similarity between the target similar block and the image effect block to be processed is greater than the preset similarity, maintaining the image information corresponding to the image effect block to be processed, wherein the image information comprises: pixel information and position information of the image effect block to be processed. By the method and the device, the problem of high complexity of the image algorithm in the related technology is solved, and the complexity of the image algorithm is reduced.

Description

Image processing method, image processing apparatus, electronic apparatus, and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, an electronic apparatus, and a storage medium.
Background
Currently, in coding standards, DCT (Discrete Cosine Transform) coding is one of the important means of image/video compression. In a DCT-based compression coding system, performing the DCT on the entire image requires a considerable amount of computation, because every transformed DCT coefficient depends on every pixel in the image. Therefore, the DCT is applied block by block: the image is first divided into 8 × 8 pixel blocks, and each block is DCT-transformed to obtain 64 DCT coefficients, which greatly reduces the amount of computation. However, because each block is transformed separately, the correlation between pixel blocks is ignored. When the DCT coefficients of each block are quantized, each coefficient is divided by a quantization coefficient and then rounded, and some high-frequency components that have little influence on the image are discarded, which achieves the goal of reducing the code rate. However, if the quantization is coarse, a large amount of high-frequency information at block edges is lost, causing discontinuous jumps at block boundaries in the reconstructed image; this is the blocking effect.
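As an illustrative sketch (not part of the original description), the following Python fragment shows block-wise DCT, coarse quantization, and reconstruction applied to a smooth gradient; the quantization step and test image are assumptions chosen only to make the boundary jump visible:

```python
import numpy as np

def dct2_matrix(n=8):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def blockwise_dct_quantize(img, q_step):
    # Transform, coarsely quantize, and reconstruct each 8x8 block independently.
    c = dct2_matrix(8)
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y + 8, x:x + 8].astype(float)
            coeff = c @ block @ c.T                    # the 64 DCT coefficients of the block
            coeff = np.round(coeff / q_step) * q_step  # coarse quantization then dequantization
            out[y:y + 8, x:x + 8] = c.T @ coeff @ c    # reconstruction
    return out

# A smooth horizontal gradient: after coarse quantization, the 8-pixel block
# boundaries show visible jumps (the blocking effect).
img = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
rec = blockwise_dct_quantize(img, q_step=64.0)
print(np.round(np.abs(np.diff(rec[0]))[5:10], 2))  # largest jump at the block boundary
```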
In related-art schemes for removing the image blocking effect, low-rank matrix recovery is used as the main means: it can fully exploit the redundancy of an image and effectively eliminate the blocking effect, but adopting low-rank matrix recovery leads to the problem of high image-algorithm complexity.
For the problem of high image-algorithm complexity in the related art, no effective solution has been proposed so far.
Disclosure of Invention
In the present embodiment, an image processing method, an apparatus, an electronic apparatus, and a storage medium are provided to at least reduce the complexity of an image algorithm in an encoding technique.
In a first aspect, there is provided an image processing method in the present embodiment, including:
acquiring an image effect block to be processed in an image;
determining the similarity between the image effect block to be processed and an image block in a preset search frame, and acquiring a target similar block, wherein the target similar block is the image block with the minimum similarity with the image effect block to be processed in the preset search frame;
and under the condition that the similarity between the target similar block and the image effect block to be processed is greater than a preset similarity, maintaining image information corresponding to the image effect block to be processed, wherein the image information comprises: pixel information and position information of the image effect block to be processed.
In some of these embodiments, the method further comprises:
under the condition that the similarity between the target similar block and the image effect block to be processed is smaller than the preset similarity, selecting a preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed;
processing the matrix based on a singular value threshold algorithm to obtain a target matrix;
and performing effect removing processing on the image effect block to be processed according to the target matrix.
In some of these embodiments, the image comprises a YUV format image; selecting the preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed, wherein the matrix comprises the following steps:
acquiring preset YUV parameter information, wherein the preset YUV parameter information comprises: the parameter value of the Y parameter, the parameter value of the U parameter and the parameter value of the V parameter; the parameter values comprise pixel size and pixel number, the parameter value of the U parameter is 1/2 of the parameter value of the Y parameter, and the parameter value of the V parameter is 1/2 of the parameter value of the Y parameter;
and selecting the preset number of image blocks according to the preset YUV parameter information, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed.
In some embodiments, after performing de-effect processing on the image effect block to be processed according to the target matrix, the method further includes:
obtaining an image characteristic value of the image effect block to be processed after effect removing processing is carried out;
and adjusting the image characteristic value to be within a preset range.
In some embodiments, selecting the preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed includes:
determining the distance between the image effect block to be processed and the image block in the preset search box;
matching image blocks outside a preset distance range in a downsampling mode with a preset step length to obtain a first number of similar blocks, and matching the image blocks within the preset distance range one by one to obtain a second number of similar blocks;
and under the condition that the sum of the first number and the second number is larger than the preset number, selecting the preset number of similar blocks from the first number of similar blocks and the second number of similar blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed.
In some of these embodiments, the method further comprises:
and under the condition that the sum of the first number and the second number is smaller than the preset number, expanding the search frame according to a preset proportion and searching similar blocks.
In some embodiments, before determining a similarity between the to-be-processed image effect block and an image block in a preset search box and acquiring a target similar block with a minimum similarity to the to-be-processed image effect block, the method further includes:
detecting whether an image block with a blocking effect exists in the preset search frame;
and under the condition that the image blocks with the block effect exist in the preset search frame, skipping the similarity calculation of the image effect blocks to be processed and the image blocks with the block effect.
In some embodiments, determining the similarity between the to-be-processed image effect block and an image block in a preset search box, and acquiring a target similar block with the minimum similarity to the to-be-processed image effect block includes:
clustering the image effect blocks to be processed in the search box to obtain clustering results;
according to the clustering result, determining the same type image blocks and non-same type image blocks of the image effect block to be processed;
determining the similarity between the image effect block to be processed and the image block of the same type, and selecting a preset number according to the sequence of the similarity from small to large to construct the matrix.
In a second aspect, there is provided in the present embodiment an image processing apparatus comprising:
the first acquisition module is used for acquiring an image effect block to be processed in an image;
the determining module is used for determining the similarity between the image effect block to be processed and an image block in a preset search frame and acquiring a target similar block, wherein the target similar block is the image block with the minimum similarity with the image effect block to be processed in the preset search frame;
a first processing module, configured to maintain image information corresponding to the image effect block to be processed when a similarity between the target similar block and the image effect block to be processed is greater than a preset similarity, where the image information includes: pixel information and position information of the image effect block to be processed.
In a third aspect, in the present embodiment, there is provided an electronic apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the image processing method according to the first aspect when executing the computer program.
In a fourth aspect, in the present embodiment, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method of the first aspect described above.
Compared with the related art, the image processing method, image processing apparatus, electronic apparatus, and storage medium provided in this embodiment acquire the image effect block to be processed in the image; determine the similarity between the image effect block to be processed and the image blocks in the preset search frame and acquire a target similar block, the target similar block being the image block in the preset search frame with the minimum similarity to the image effect block to be processed; and, when the similarity between the target similar block and the image effect block to be processed is greater than the preset similarity, keep the image information corresponding to the image effect block to be processed, where the image information includes the pixel information and position information of the image effect block to be processed. This avoids applying low-rank matrix recovery to a current image block that is essentially unrelated to its surrounding image blocks, and thus reduces the complexity of the image algorithm.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a terminal of the image processing method of the present embodiment;
fig. 2 is a flowchart of an image processing method of the present embodiment;
fig. 3 is a block diagram of the image processing apparatus of the present embodiment.
Detailed Description
For a clearer understanding of the objects, technical solutions and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a", "an", "the" and similar referents in this application does not denote a limitation of quantity, either singular or plural. The terms "comprises," "comprising," "has," "having" and any variations thereof, as referred to in this application, are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. Reference throughout this application to "connected," "coupled," and the like is not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "And/or" describes an association relationship between associated objects and means that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In general, the character "/" indicates that the objects associated before and after it are in an "or" relationship. The terms "first," "second," "third," and the like in this application are used for distinguishing between similar items and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or a similar computing device. For example, the image processing method is executed on a terminal, and fig. 1 is a block diagram of a hardware configuration of the terminal according to the image processing method of the present embodiment. As shown in fig. 1, the terminal may include one or more processors 102 (only one shown in fig. 1) and a memory 104 for storing data, wherein the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the terminal described above. For example, the terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 can be used for storing computer programs, for example, software programs and modules of application software, such as a computer program corresponding to the image processing method in the present embodiment, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network described above includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
First, it should be noted that blocking artifacts are mainly caused by quantization errors introduced by block-wise quantization, but they manifest differently in the image domain depending on the image content. There are two types of blocking artifacts:
Trapezoidal blocking: occurs at strong edges of the image. At low code rates, many of the high-frequency DCT coefficients are quantized to zero, so the high-frequency components associated with strong edges are not fully represented in the transform domain. Because the image blocks are processed independently, the continuity of strong edges crossing block boundaries cannot be guaranteed, and sawtooth-shaped noise appears at image edges; this is called the trapezoidal blocking effect.
Lattice blocking: occurs mostly in flat areas of the image. In the transform domain, the DC coefficient represents the average luminance of the image block, so this coefficient contains most of the block's energy. In flat regions the luminance variation is small, but if the luminance slowly increases or decreases, rounding during quantization may push the DC coefficient across the decision threshold of an adjacent quantization level, resulting in abrupt luminance changes at block boundaries in the reconstructed image, which visually appear as sheet-like contours in the flat region; this noise is called lattice blocking.
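As a small numerical illustration (an assumption added for this description, not taken from the patent), the following sketch shows how rounding during quantization can push the DC values of two adjacent, nearly identical flat blocks onto different quantization levels, producing a visible jump at their shared boundary:

```python
import numpy as np

# Two adjacent flat blocks whose average luminance differs only slightly.
block_a = np.full((8, 8), 100.4)
block_b = np.full((8, 8), 100.6)

q_step = 1.0  # assumed quantization step for the DC coefficient
# The DC coefficient is proportional to the block mean; quantize the means directly here.
dc_a = round(block_a.mean() / q_step) * q_step   # -> 100.0
dc_b = round(block_b.mean() / q_step) * q_step   # -> 101.0
print(dc_a, dc_b)  # a 0.2 luminance difference becomes a full quantization-level jump
```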
To address the blocking-artifact problem, the related art mainly employs low-rank matrix recovery. Low-rank matrix recovery can fully exploit the redundancy of an image and effectively eliminate the blocking effect. Its basic principle is as follows. The rank of an image without blocking artifacts is usually very small, because the correlation within the same region is very strong: in the image matrix, the current vector can be represented by other vectors, so after singular value decomposition only a few singular values are non-zero and the rank is low. Once blocking artifacts are introduced, the original correlation is destroyed, the vectors in the image matrix tend to become linearly independent, and after singular value decomposition many singular values are non-zero, so the rank is high. Low-rank matrix recovery exploits this property by applying a threshold algorithm to the singular values, which lowers the rank of the image, reduces the blocking effect, and brings the image quality closer to that of the original image. Research on low-rank matrix recovery in the related art shows, however, that a singular value computation has to be performed on the image blocks in the matrix every time low-rank matrix recovery is carried out, which leads to excessive computation and high algorithm complexity.
Therefore, in order to solve the problem of high complexity of image algorithm in the related art, an image processing method is provided in the present embodiment, and fig. 2 is a flowchart of the image processing method of the present embodiment, as shown in fig. 2, the flowchart includes the following steps:
step S201, obtaining an image effect block to be processed in the image.
In this step, the image may be captured in real time or acquired from a database in which images are stored. The image effect block to be processed in the image is a blocky region produced by the blocking effect. The blocking effect arises because block-based transform coding is widely used in image compression coding: as the code rate decreases, quantization becomes coarser, discontinuities appear at block boundaries, and obvious defects are formed in the reconstructed image.
Step S202, determining the similarity between the image effect block to be processed and the image block in the preset search frame, and obtaining a target similar block, wherein the target similar block is the image block with the minimum similarity with the image effect block to be processed in the preset search frame.
In this step, the preset search box may be established with the to-be-processed image effect block as the center, or may be set by the user himself, and the influence of an irrelevant image block far away from the to-be-processed image effect block in the image may be eliminated by determining the similarity between the to-be-processed image effect block and the image block in the preset search box.
It should be noted that all image blocks in the preset search box may be similar blocks, that is, image blocks having certain similarity to the image effect block to be processed.
Step S203, when the similarity between the target similar block and the image effect block to be processed is greater than the preset similarity, maintaining image information corresponding to the image effect block to be processed, where the image information includes: pixel information and position information of the image effect block to be processed.
In this step, when the similarity between the target similar block and the image effect block to be processed is greater than the preset similarity, the image effect block to be processed is directly used as the target image effect block, the image information corresponding to the image effect block to be processed is maintained, and the image effect block to be processed does not need to be subjected to the singular value algorithm processing of low-rank matrix recovery, so that the current image block completely unrelated to the surrounding image blocks can be prevented from being subjected to the low-rank matrix recovery processing, and the complexity of the image algorithm is reduced.
Based on the above steps S201 to S203, when the similarity between the target similar block and the image effect block to be processed is greater than the preset similarity, the image information corresponding to the image effect block to be processed is kept unchanged, where the image information includes the pixel information and position information of the image effect block to be processed, and the block does not need to undergo the singular value algorithm processing of low-rank matrix recovery. A current image block that is essentially unrelated to its surrounding image blocks is therefore prevented from being processed by low-rank matrix recovery, which reduces the complexity of the image algorithm.
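A minimal sketch of this skip decision is given below. It assumes the "similarity" is a block-difference measure (SAD here) in which a smaller value means a closer match, consistent with the description's selection of blocks in ascending order of similarity; the function name, block size, and threshold are illustrative assumptions only:

```python
import numpy as np

def best_match(pending_block, candidate_blocks, skip_threshold):
    # Steps S201-S203 as a sketch: find the candidate block in the search window
    # whose difference from the pending block is smallest (the target similar
    # block); if even that best match differs by more than the threshold, the
    # pending block is kept unchanged and no low-rank recovery is applied to it.
    sads = [np.abs(pending_block.astype(float) - c.astype(float)).sum()
            for c in candidate_blocks]
    best = int(np.argmin(sads))
    if sads[best] > skip_threshold:
        return None          # keep pixel and position information as-is
    return best              # index of the target similar block

# Usage sketch with random 8x8 blocks and an assumed threshold.
rng = np.random.default_rng(0)
pending = rng.integers(0, 256, (8, 8))
candidates = [rng.integers(0, 256, (8, 8)) for _ in range(16)]
print(best_match(pending, candidates, skip_threshold=5000.0))
```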
In some embodiments, a preset number of image blocks can be selected under the condition that the similarity between the target similar block and the image effect block to be processed is smaller than a preset similarity, and a matrix is constructed according to the similarity between the preset number of similar blocks and the image effect block to be processed; processing the matrix based on a singular value threshold algorithm to obtain a target matrix; and performing effect removing treatment on the image effect block to be treated according to the target matrix.
In the embodiment, a target matrix is obtained by processing a matrix through a singular value threshold algorithm under the condition that the similarity between a target similar block and an image effect block to be processed is smaller than a preset similarity; and finally, according to the target matrix, the image effect block to be processed is subjected to effect removing treatment, so that the effect of the image effect block to be processed can be removed, and the image quality is improved.
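The singular value threshold step itself can be sketched as follows, assuming the matrix columns are the vectorized similar blocks; the threshold value, matrix shape, and synthetic data are assumptions, and the aggregation of the target matrix back into the image is omitted:

```python
import numpy as np

def svt_denoise(patch_matrix, tau):
    # Singular value thresholding: soft-shrink the singular values of the matrix
    # whose columns are the vectorized similar blocks, which lowers its rank and
    # suppresses the blocking noise (a sketch; the patent does not fix tau).
    u, s, vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (u * s_shrunk) @ vt

# Example: 16 similar 8x8 blocks vectorized into 64-dimensional columns.
rng = np.random.default_rng(1)
clean_group = rng.normal(size=(64, 1)) @ np.ones((1, 16))     # an ideally rank-1 group
noisy_group = clean_group + 0.01 * rng.normal(size=(64, 16))  # blocking noise raises the rank
target = svt_denoise(noisy_group, tau=0.5)
print(np.linalg.matrix_rank(noisy_group), np.linalg.matrix_rank(target, tol=1e-6))
```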
In some of these embodiments, the image comprises a YUV format image; selecting a preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed, wherein the matrix comprises the following steps: acquiring preset YUV parameter information, wherein the preset YUV parameter information comprises: the parameter value of the Y parameter, the parameter value of the U parameter and the parameter value of the V parameter; the parameter values comprise the pixel size and the pixel number, the parameter value of the U parameter is 1/2 of the parameter value of the Y parameter, and the parameter value of the V parameter is 1/2 of the parameter value of the Y parameter; and selecting a preset number of image blocks according to preset YUV parameter information, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed.
In this embodiment, when applying singular value threshold processing to the low-rank matrix, an image in YUV format can be processed component by component using the preset YUV parameter information in one of the following ways:
mode 1: and meanwhile, a low-rank matrix recovery method is adopted for YUV components, the total quantity of pixels of the UV component is 1/2 of the Y component, so that the size of an image block to be processed, the size of the image block, the number of the image blocks and the size of a searching window are reduced to 1/2 of the Y component, and edges are not required to be protected in an edge detection mode because the texture of the UV component is not rich in the Y component.
Mode 2: low-rank matrix recovery is applied to the Y, U, and V components simultaneously, and the Y component has the richest texture. After the Y component is processed, the positions of the similar blocks found for the Y component are recorded, and when the UV components are processed these positions are used directly to guide the UV components to the positions of their corresponding similar blocks. Since the number of pixels of the UV components is 1/2 of that of the Y component, the recorded positions are first mapped onto the UV components and then divided by the corresponding factor to obtain the positions of the UV similar blocks.
Mode 3: low-rank matrix recovery is applied to the Y, U, and V components simultaneously, and the UV components are iterated alternately with far fewer iterations (for example, 1) than the Y component.
In the above ways, the three YUV components of a YUV-format image are processed separately, and different parameters are formulated according to the characteristics of each component, which improves the image deblocking result and reduces the algorithm complexity.
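A sketch of the position reuse in mode 2 is shown below; the per-axis scale factor of 2 is an assumption for illustration, since the description only states that the recorded positions are divided by "a corresponding multiple":

```python
def map_y_positions_to_uv(y_positions, scale=2):
    # Mode 2 sketch: reuse the similar-block positions recorded on the Y component
    # to guide the UV components. `scale` is the assumed per-axis subsampling
    # factor; the description only says the positions are divided by a
    # corresponding multiple.
    return [(y // scale, x // scale) for (y, x) in y_positions]

# Similar-block positions (row, column) recorded while processing the Y component.
y_positions = [(16, 24), (40, 8), (56, 56)]
print(map_y_positions_to_uv(y_positions))  # positions reused on the UV planes
```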
In some embodiments, after the image effect block to be processed is subjected to the effect removing processing according to the target matrix, an image characteristic value of the image effect block to be processed after the effect removing processing is performed can be obtained; and adjusting the image characteristic value to be within a preset range.
In the process of performing low-rank matrix recovery with a singular value threshold algorithm on an image effect block, some standard characteristic values of the image may change, which leads to poor image quality after the singular value threshold algorithm. Therefore, in this embodiment, the image characteristic value of the image effect block to be processed is obtained after the effect removing processing, and by adjusting the image characteristic value to be within the preset range, the image edges can be protected and the overall image quality can be improved.
It should be noted that obtaining the image characteristic value of the image effect block to be processed after the effect removing processing, and adjusting the image characteristic value to be within the preset range, may be performed in the following manners:
In the first mode, an edge detection method is adopted to protect the edges, so that deblocking processing at the edges is avoided and the edges are not blurred. A common choice is Sobel edge detection, with the following formulas:
Gx = [P(i+1, j-1) + 2·P(i+1, j) + P(i+1, j+1)] - [P(i-1, j-1) + 2·P(i-1, j) + P(i-1, j+1)]
Gy = [P(i-1, j-1) + 2·P(i, j-1) + P(i+1, j-1)] - [P(i-1, j+1) + 2·P(i, j+1) + P(i+1, j+1)]
|G| = sqrt(Gx^2 + Gy^2), commonly approximated as |G| = |Gx| + |Gy|
where P(i, j) is the pixel to be detected, Gx and Gy are the computed gradients, (i, j) are the coordinates of the pixel inside the image effect block to be processed, and thr is a user-defined constant; if |G| is greater than thr, the pixel is marked as an edge point.
In the second mode, dilation is performed on the edges after low-rank matrix recovery to enhance the edge information. The principle of the dilation algorithm is as follows:
If more than m pixels within the n×n neighbourhood of the current point are edge points, the current point is also marked as an edge point, where n may be 3, 5, or 7, and m may correspondingly be 4, 10, or 20.
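The two protection modes can be sketched together as follows, using the Sobel gradient formulas given above and the n×n majority dilation just described; the threshold and the synthetic test image are assumptions:

```python
import numpy as np

def sobel_edge_mask(img, thr):
    # First mode: mark pixel (i, j) as an edge point when |Gx| + |Gy| > thr,
    # using the Sobel formulas given above.
    p = img.astype(float)
    gx = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]) \
       - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:])
    gy = (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2]) \
       - (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:])
    mask = np.zeros(img.shape, dtype=bool)
    mask[1:-1, 1:-1] = (np.abs(gx) + np.abs(gy)) > thr
    return mask

def dilate_edges(mask, n=3, m=4):
    # Second mode: a point becomes an edge point when more than m pixels in its
    # n x n neighbourhood are already edge points (n=3, m=4 is one listed pairing).
    pad = n // 2
    padded = np.pad(mask, pad)
    out = mask.copy()
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if padded[i:i + n, j:j + n].sum() > m:
                out[i, j] = True
    return out

# Usage: protect the vertical step edge of a synthetic image from deblocking.
img = np.zeros((16, 16))
img[:, 8:] = 255
protected = dilate_edges(sobel_edge_mask(img, thr=200.0))
print(protected[8, 6:10])
```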
In some embodiments, selecting a preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed includes: determining the distance between an image effect block to be processed and an image block in a preset search frame; matching image blocks outside a preset distance range in a downsampling mode with a preset step length to obtain a first number of similar blocks, and matching the image blocks within the preset distance range one by one to obtain a second number of similar blocks; and under the condition that the sum of the first number and the second number is greater than the preset number, selecting the preset number of similar blocks from the first number of similar blocks and the second number of similar blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed.
In this embodiment, matching is performed in a coarse-to-fine manner. For blocks that are far from the current block in the spatial or temporal domain, a down-sampled block-matching search is used, sampling every 2, 3, or 4 pixels, to obtain the first number of similar blocks; for blocks close to the current image block (for example, within 3-5 pixels above, below, to the left, and to the right of it), a pixel-by-pixel search is used to obtain the second number of similar blocks. Finally, when the sum of the first number and the second number is greater than the preset number, the preset number of similar blocks are selected from the first and second groups, and a matrix is constructed according to the similarity between the selected similar blocks and the image effect block to be processed. Searching pixel by pixel near the current block avoids the problem that similar blocks matched far away may be noise blocks, and therefore improves the denoising of the image blocking effect.
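A sketch of this coarse-to-fine candidate collection is given below; the difference measure (SAD), the near-distance bound, the stride, and the window size are assumed values for illustration:

```python
import numpy as np

def collect_similar_blocks(img, ref_y, ref_x, bs=8, near=4, stride=3, win=24):
    # Coarse-to-fine matching sketch: blocks within `near` pixels of the pending
    # block are visited one by one (fine search), blocks farther away are visited
    # with a down-sampling stride (coarse search). All numeric defaults are
    # illustrative assumptions.
    ref = img[ref_y:ref_y + bs, ref_x:ref_x + bs].astype(float)
    near_blocks, far_blocks = [], []
    h, w = img.shape
    for y in range(max(0, ref_y - win), min(h - bs, ref_y + win) + 1):
        for x in range(max(0, ref_x - win), min(w - bs, ref_x + win) + 1):
            is_near = abs(y - ref_y) <= near and abs(x - ref_x) <= near
            if not is_near and (y % stride or x % stride):
                continue  # coarse region: only every `stride`-th position is tested
            sad = np.abs(img[y:y + bs, x:x + bs].astype(float) - ref).sum()
            (near_blocks if is_near else far_blocks).append(((y, x), sad))
    return far_blocks, near_blocks  # first number (coarse) and second number (fine)

img = np.random.default_rng(2).integers(0, 256, (64, 64))
first, second = collect_similar_blocks(img, 32, 32)
print(len(first), len(second))
```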
In some embodiments, the search frame may be further expanded according to a preset ratio and the similar block search may be performed when the sum of the first number and the second number is smaller than a preset number.
In this embodiment, when the sum of the first number and the second number is smaller than the preset number, the accuracy of matching the similar blocks may be improved by expanding the search frame according to a preset ratio and performing similar block search.
In some embodiments, before determining the similarity between the image block to be processed and the image block in the preset search frame and acquiring the target similar block, whether the image block with the block effect exists in the preset search frame may also be detected; and in the case of detecting that the image block with the block effect exists in the preset search frame, skipping the similarity calculation of the image block to be processed and the image block with the block effect.
In this embodiment, similarity matching between the image effect block to be processed and image blocks that themselves exhibit the blocking effect is skipped, which avoids the influence of such irrelevant effect blocks and improves the deblocking result.
In some embodiments, determining the similarity between the image effect block to be processed and the image block in the preset search box, and acquiring the target similar block includes: clustering image effect blocks to be processed in a search box to obtain a clustering result; according to the clustering result, determining the same type image blocks and non-same type image blocks of the image effect blocks to be processed; determining the similarity between the effect block of the image to be processed and the image block of the same type, and selecting a preset number according to the sequence of the similarity from small to large to construct a matrix.
In this embodiment, an image segmentation method is adopted to skip the similarity calculation for candidate blocks that are not in the same content area. The image segmentation method is as follows:
Clustering is performed around k points in space, and each object is assigned to the point closest to it. The value of each cluster center is then updated step by step through an iterative method until the best clustering result is obtained.
The method comprises the following basic steps:
Step 1: appropriately select the initial centers of the c classes;
Step 2: in the k-th iteration, compute the distance from each sample to the c centers, and assign the sample to the class whose center is closest;
Step 3: update the center value of each class, for example by taking the mean of its samples;
Step 4: repeat steps 2 and 3 to update all c cluster centers; if the values remain unchanged, end the iteration; otherwise, continue iterating.
By the clustering mode, the matching of non-homogeneous image blocks is eliminated, the influence of the non-homogeneous image blocks is avoided, and the image deblocking effect is improved.
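A minimal k-means sketch over vectorized candidate blocks is shown below; using raw pixel vectors as features, k = 3, and random data are assumptions made only for illustration. Only blocks whose cluster label matches that of the image effect block to be processed would then take part in the similarity computation:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    # Steps 1-4 above as a sketch: pick k initial centers, assign each sample to
    # the nearest center, recompute the centers as class means, and stop once the
    # centers no longer change.
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        new_centers = np.array([points[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels

# Cluster vectorized candidate blocks; blocks sharing the pending block's label
# are treated as same-type blocks for the similarity computation.
blocks = np.random.default_rng(3).normal(size=(40, 64))
labels = kmeans(blocks, k=3)
print(labels[:10])
```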
In some embodiments, before determining the similarity between the image effect block to be processed and the image blocks in the preset search frame and selecting a first preset number of image blocks to construct a matrix according to the sequence of the similarity from small to large, it may be further determined whether the number of the image blocks in the preset search frame is greater than a second preset number; and expanding the search frame according to a preset proportion when the number of the image blocks in the preset search frame is judged to be smaller than a second preset number.
In this embodiment, by determining that the number of image blocks in the preset search frame is smaller than the second preset number, the search frame is enlarged according to the preset ratio, so that the accuracy of matching similar blocks can be improved.
In this embodiment, an image processing apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and the description of the apparatus is omitted for brevity. The terms "module," "unit," "subunit," and the like as used below may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware or a combination of software and hardware is also possible and contemplated.
Fig. 3 is a block diagram showing the configuration of an image processing apparatus of the present embodiment, which includes, as shown in fig. 3:
a first obtaining module 31, configured to obtain an image effect block to be processed in an image;
a determining module 32, coupled to the first obtaining module 31, configured to determine a similarity between the to-be-processed image effect block and an image block in the preset search frame, and obtain a target similar block, where the target similar block is an image block in the preset search frame with the smallest similarity to the to-be-processed image effect block;
the first processing module 33, coupled to the determining module 32, is configured to maintain image information corresponding to the image effect block to be processed, when the similarity between the target similar block and the image effect block to be processed is greater than a preset similarity, where the image information includes: pixel information and position information of the image effect block to be processed.
In some of these embodiments, the apparatus further comprises: the construction module is used for selecting a preset number of image blocks under the condition that the similarity between the target similar block and the image effect block to be processed is smaller than a preset similarity, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed; the second processing module is used for processing the matrix based on a singular value threshold algorithm to obtain a target matrix; and the third module is used for performing effect removing processing on the image effect block to be processed according to the target matrix.
In some of these embodiments, where the image comprises a YUV format image, the construction module comprises: the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring preset YUV parameter information, and the preset YUV parameter information comprises: the parameter value of the Y parameter, the parameter value of the U parameter and the parameter value of the V parameter; the parameter values comprise the pixel size and the pixel number, the parameter value of the U parameter is 1/2 of the parameter value of the Y parameter, and the parameter value of the V parameter is 1/2 of the parameter value of the Y parameter; and the construction unit is used for selecting a preset number of image blocks according to the preset YUV parameter information and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed.
In some of these embodiments, the apparatus further comprises: the second acquisition module is used for acquiring the image characteristic value of the image effect block to be processed after the effect removing processing is carried out; and the adjusting module is used for adjusting the image characteristic value to a preset range.
In some of these embodiments, the determination module 32 includes: the determining unit is used for determining the distance between the image effect block to be processed and the image block in the preset search frame; the matching unit is used for matching the image blocks outside the preset distance range in a downsampling mode with preset step length to obtain a first number of similar blocks, and matching the image blocks within the preset distance range one by one to obtain a second number of similar blocks; and the first construction unit is used for selecting the similar blocks with the preset number from the similar blocks with the first number and the similar blocks with the second number under the condition that the sum of the first number and the second number is larger than the preset number, and constructing a matrix according to the similarity between the similar blocks with the preset number and the image effect blocks to be processed.
In some of these embodiments, the apparatus further comprises: and the searching module is used for expanding the searching frame according to a preset proportion and searching similar blocks under the condition that the sum of the first quantity and the second quantity is smaller than the preset quantity.
In some of these embodiments, the apparatus further comprises: the detection module is used for judging and detecting whether an image block with a blocking effect exists in a preset search frame; and the calculating module is used for skipping the similarity calculation of the image block with the block effect to be processed and the image block with the block effect under the condition that the image block with the block effect is detected to exist in the preset search frame.
In some of these embodiments, the determination module 32 includes: a clustering unit, configured to cluster the image effect blocks to be processed in the search box to obtain a clustering result; a determining unit, configured to determine the same-type image blocks and non-same-type image blocks of the image effect block to be processed according to the clustering result; and a second construction unit, configured to determine the similarity between the image effect block to be processed and the same-type image blocks, and select a preset number of them in ascending order of similarity to construct the matrix.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
There is also provided in this embodiment an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
step S1, acquiring the image effect block to be processed in the image.
Step S2, determining a similarity between the image effector block to be processed and the image block in the preset search frame, and obtaining a target similar block, where the target similar block is the image block in the preset search frame with the smallest similarity to the image effector block to be processed.
Step S3, when the similarity between the target similar block and the image effect block to be processed is greater than the preset similarity, maintaining image information corresponding to the image effect block to be processed, where the image information includes: pixel information and position information of the image effect block to be processed.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, and details are not described again in this embodiment.
In addition, in combination with the image processing method provided in the foregoing embodiment, a storage medium may also be provided to implement this embodiment. The storage medium having stored thereon a computer program; the computer program, when executed by a processor, implements any of the image processing methods in the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, and it is obvious to those skilled in the art that the present application can be applied to other similar cases according to the drawings without creative efforts. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the patent protection. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (11)

1. An image processing method, comprising:
acquiring an image effect block to be processed in an image;
determining the similarity between the image effect block to be processed and an image block in a preset search frame, and acquiring a target similar block, wherein the target similar block is the image block with the minimum similarity with the image effect block to be processed in the preset search frame;
and under the condition that the similarity between the target similar block and the image effect block to be processed is greater than a preset similarity, maintaining image information corresponding to the image effect block to be processed, wherein the image information comprises: pixel information and position information of the image effect block to be processed.
2. The image processing method according to claim 1, characterized in that the method further comprises:
under the condition that the similarity between the target similar block and the image effect block to be processed is smaller than the preset similarity, selecting a preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed;
processing the matrix based on a singular value threshold algorithm to obtain a target matrix;
and performing effect removing processing on the image effect block to be processed according to the target matrix.
3. The image processing method according to claim 2, wherein the image comprises a YUV format image; selecting the preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed, wherein the matrix comprises the following steps:
acquiring preset YUV parameter information, wherein the preset YUV parameter information comprises: the parameter value of the Y parameter, the parameter value of the U parameter and the parameter value of the V parameter; the parameter values comprise pixel size and pixel number, the parameter value of the U parameter is 1/2 of the parameter value of the Y parameter, and the parameter value of the V parameter is 1/2 of the parameter value of the Y parameter;
and selecting the preset number of image blocks according to the preset YUV parameter information, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed.
4. The image processing method according to claim 2, wherein after performing de-effect processing on the image effect block to be processed according to the object matrix, the method further comprises:
obtaining an image characteristic value of the image effect block to be processed after effect removing processing is carried out;
and adjusting the image characteristic value to be within a preset range.
5. The image processing method according to claim 2, wherein selecting the preset number of image blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed comprises:
determining the distance between the image effect block to be processed and the image block in the preset search box;
matching image blocks outside a preset distance range in a downsampling mode with a preset step length to obtain a first number of similar blocks, and matching the image blocks within the preset distance range one by one to obtain a second number of similar blocks;
and under the condition that the sum of the first number and the second number is larger than the preset number, selecting the preset number of similar blocks from the first number of similar blocks and the second number of similar blocks, and constructing a matrix according to the similarity between the preset number of similar blocks and the image effect block to be processed.
6. The image processing method according to claim 5, characterized in that the method further comprises:
and under the condition that the sum of the first number and the second number is smaller than the preset number, expanding the search frame according to a preset proportion and searching similar blocks.
7. The image processing method according to claim 1, wherein before determining the similarity between the image effect block to be processed and the image block in a preset search box and obtaining the target similar block, the method further comprises:
detecting whether an image block with a blocking effect exists in the preset search frame;
and under the condition that the image blocks with the block effect exist in the preset search frame, skipping the similarity calculation of the image effect blocks to be processed and the image blocks with the block effect.
8. The image processing method according to claim 2, wherein determining the similarity between the image effect block to be processed and the image block in the preset search box, and obtaining the target similar block comprises:
clustering the image effect blocks to be processed in the search box to obtain clustering results;
according to the clustering result, determining the same type image blocks and non-same type image blocks of the image effect block to be processed;
determining the similarity between the image effect block to be processed and the image block of the same type, and selecting a preset number according to the sequence of the similarity from small to large to construct the matrix.
9. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring an image effect block to be processed in an image;
the determining module is used for determining the similarity between the image effect block to be processed and the image block in a preset search frame and acquiring a target similar block with the minimum similarity with the image effect block to be processed;
a processing module, configured to keep image information corresponding to the image effect block to be processed when a similarity between the target similar block and the image effect block to be processed is greater than a preset similarity, where the image information includes: pixel information and position information of the image effect block to be processed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the image processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the image processing method of any one of claims 1 to 8.
CN202111591523.4A 2021-12-23 2021-12-23 Image processing method, image processing apparatus, electronic apparatus, and storage medium Pending CN114510990A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111591523.4A CN114510990A (en) 2021-12-23 2021-12-23 Image processing method, image processing apparatus, electronic apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111591523.4A CN114510990A (en) 2021-12-23 2021-12-23 Image processing method, image processing apparatus, electronic apparatus, and storage medium

Publications (1)

Publication Number Publication Date
CN114510990A true CN114510990A (en) 2022-05-17

Family

ID=81548113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111591523.4A Pending CN114510990A (en) 2021-12-23 2021-12-23 Image processing method, image processing apparatus, electronic apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN114510990A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170893A (en) * 2022-08-29 2022-10-11 荣耀终端有限公司 Training method of common-view gear classification network, image sorting method and related equipment

Similar Documents

Publication Publication Date Title
RU2553085C2 (en) Video encoding/decoding methods, video encoding/decoding devices and programmes therefor
Zhang et al. Low-rank decomposition-based restoration of compressed images via adaptive noise estimation
US9196021B2 (en) Video enhancement using related content
US10469876B2 (en) Non-local adaptive loop filter combining multiple denoising technologies and grouping image patches in parallel
US20120207367A1 (en) Alignment of an ordered stack of images from a specimen
Chao et al. On the design of a novel JPEG quantization table for improved feature detection performance
US8155213B2 (en) Seamless wireless video transmission for multimedia applications
RU2706228C1 (en) Scanning order selection method and device
CN111445424B (en) Image processing method, device, equipment and medium for processing mobile terminal video
US7054503B2 (en) Image processing system, image processing method, and image processing program
Mori et al. Fast template matching based on normalized cross correlation using adaptive block partitioning and initial threshold estimation
CN110796615A (en) Image denoising method and device and storage medium
US8644636B2 (en) Method and apparatus for removing image blocking artifact by using transformation coefficient
CN107360435B (en) Blockiness detection methods, block noise filtering method and device
US9712828B2 (en) Foreground motion detection in compressed video data
CN113012061A (en) Noise reduction processing method and device and electronic equipment
Kuang et al. Fast mode decision algorithm for HEVC screen content intra coding
CN114510990A (en) Image processing method, image processing apparatus, electronic apparatus, and storage medium
CN110536138B (en) Lossy compression coding method and device and system-on-chip
JP2006517069A (en) Motion vector prediction method and system
US20160219297A1 (en) Method and system for block matching based motion estimation
Lee et al. Compressed domain video saliency detection using global and local spatiotemporal features
CN105992012B (en) Error concealment method and device
US9414067B2 (en) Methods and systems for detection of block based video dropouts
US9215474B2 (en) Block-based motion estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination