CN112334942A - Image processing method and device

Info

Publication number
CN112334942A
Authority
CN
China
Prior art keywords
image
filtering
blurring
region
processed
Legal status
Pending
Application number
CN201980039065.8A
Other languages
Chinese (zh)
Inventor
李恒杰
赵文军
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd
Publication of CN112334942A

Classifications

    • G06T5/70

Abstract

Provided are an image processing method and apparatus. The method comprises: acquiring an image to be processed, wherein the image to be processed comprises a progressive blurring region and a non-progressive blurring region adjacent to each other; and filtering the progressive blurring region based on a first filter kernel and filtering the non-progressive blurring region based on a second filter kernel. The filtering weight of the internal value points in the first filter kernel, which corresponds to the progressive blurring region, is greater than the filtering weight of the internal value points in the second filter kernel, which corresponds to the non-progressive blurring region; the internal value points in the first filter kernel and the internal value points in the second filter kernel are the same in number and in position. In this way, a smooth transition between different regions of the image can be achieved, the unpleasant visual experience caused by filtering boundaries is alleviated, and a good visual effect is obtained.

Description

Image processing method and device
Copyright declaration
The disclosure of this patent document contains material which is subject to copyright protection. The copyright is owned by the copyright owner. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the official files and records of the patent and trademark office.
Technical Field
The present application relates to the field of image processing, and more particularly, to an image processing method and apparatus.
Background
In image transmission applications, captured images and videos are usually transmitted in real time, which requires a large bandwidth. In order to reduce the transmission resources occupied by an image, the image can be blurred by filtering. For example, the original values of the pixels are maintained in the region of interest (ROI), while the high-frequency information in other regions is reduced by mean filtering or Gaussian filtering based on the same or different filter radii. However, between the ROI and the filtered regions, and between filtered regions using different filter radii, there are distinct filtering boundaries, and the visual effect is poor.
Disclosure of Invention
The present application provides an image processing method and apparatus, which aim to soften the boundaries between filtering regions with different filter radii, achieve smooth filtering, and improve the visual effect.
In a first aspect, an image processing method is provided. The method comprises: acquiring an image to be processed, wherein the image to be processed comprises a progressive blurring region and a non-progressive blurring region adjacent to each other; and filtering the progressive blurring region based on a first filter kernel and filtering the non-progressive blurring region based on a second filter kernel. Each of the first filter kernel and the second filter kernel comprises internal value points and external value points, and the external value points in each filter kernel surround the internal value points; the filtering weight of the internal value points in the first filter kernel is greater than that of the internal value points in the second filter kernel, and the internal value points in the first filter kernel and the internal value points in the second filter kernel are the same in number and in position.
In a second aspect, an image processing apparatus is provided for performing the method of the first aspect.
In a third aspect, an image processing apparatus is provided, which comprises a memory for storing instructions and a processor for executing the instructions stored in the memory; execution of the instructions stored in the memory causes the apparatus to perform the method of the first aspect.
In a fourth aspect, a chip is provided, where the chip includes a processing module and a communication interface, the processing module is configured to control the communication interface to communicate with the outside, and the processing module is further configured to implement the method of the first aspect.
In a fifth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a computer, causes the computer to carry out the method of the first aspect. Specifically, the computer may be the image processing apparatus described above.
In a sixth aspect, there is provided a computer program product containing instructions which, when executed by a computer, cause the computer to carry out the method of the first aspect. Specifically, the computer may be the image processing apparatus described above.
Drawings
FIG. 1 is a schematic diagram of mean filtering;
FIG. 2 is a schematic diagram of weight filtering;
FIG. 3 is a schematic diagram of Gaussian filtering;
FIG. 4 is a schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of external value points and internal value points of a filter kernel provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a first filter kernel and a second filter kernel provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of progressive blurring regions and non-progressive blurring regions in two blurring layers provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of at least one blurring layer in an image to be processed provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of at least one blurring layer and an ROI in an image to be processed provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of the width of a progressive blurring region and the distance between a pixel to be processed in the progressive blurring region and the inner boundary of the progressive blurring region provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of various filtering modes provided by embodiments of the present application;
FIG. 12 is a schematic diagram of mirror extension provided by an embodiment of the present application;
FIG. 13 is a schematic flowchart of an image processing method provided by another embodiment of the present application;
FIG. 14 is a schematic diagram of a progressive sharpening region, a non-progressive sharpening region, and an original-value-holding region in an image to be processed provided by an embodiment of the present application;
FIG. 15 is a schematic diagram of sharpening provided by an embodiment of the present application;
FIG. 16 is a schematic diagram of feathering provided by an embodiment of the present application;
FIG. 17 is another schematic diagram of sharpening provided by an embodiment of the present application;
FIG. 18 is another schematic flowchart of an image processing method provided by an embodiment of the present application;
FIG. 19 is a schematic block diagram of an image processing apparatus provided by an embodiment of the present application;
FIG. 20 is another schematic block diagram of an image processing apparatus provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
To facilitate understanding of the embodiments of the present application, first, the terms referred to in the present application will be briefly described below.
1. Image filtering: the process of performing convolution processing on an input signal. Expressed as a function: filtered signal = convolution(input signal, convolution template). One representation of a convolution template is a filter kernel (or convolution kernel). Different convolution templates determine different filtering modes, which gives rise to high-pass, low-pass, band-stop, and other filtering modes.
To achieve the goal of low-pass filtering, i.e., keeping the low-frequency part of the signal and reducing the high-frequency part, the input signal may be processed with a mean convolution template (i.e., mean filtering), a Gaussian convolution template (i.e., Gaussian filtering), and so on.
High-pass filtering is the counterpart of low-pass filtering: the high-frequency part of the signal is preserved and the low-frequency part is reduced. Since the present application is primarily concerned with low-pass filtering, high-pass filtering is not described in detail here.
In an embodiment of the present application, the image filtering may include spatial domain (spatial domain) filtering and frequency domain (frequency domain) filtering.
Spatial filtering is a neighborhood processing method: a neighborhood operation is performed on the image in image space by means of a template, and the value of each pixel in the processed image is computed from the pixel values in the corresponding neighborhood of that pixel according to the template.
In one implementation, the spatial filtering process may be embodied by the following formula, for example:
g(x, y) = Σ_{i=-r..r} Σ_{j=-r..r} w(i, j) · f(x + i, y + j)
wherein w(i, j) represents the template; f(x + i, y + j) represents the value of the pixel (x + i, y + j) in the neighborhood of the pixel (x, y); g(x, y) represents the value of the pixel (x, y) after filtering; r is the filter radius, and r is a positive integer.
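As a minimal illustrative sketch (not part of the patent text), the spatial-domain formula above can be written in Python as follows; the function and variable names are chosen here only for illustration.

```python
import numpy as np

def spatial_filter(image, template):
    """Apply g(x, y) = sum_{i,j} w(i, j) * f(x + i, y + j).

    `image` is the 2-D array f and `template` is the (2r+1) x (2r+1) weight
    matrix w. Only pixels whose full neighborhood lies inside the image are
    computed (no border padding), so the output shrinks by r on each side.
    """
    r = template.shape[0] // 2
    h, w = image.shape
    out = np.zeros((h - 2 * r, w - 2 * r), dtype=float)
    for x in range(r, h - r):
        for y in range(r, w - r):
            neighborhood = image[x - r:x + r + 1, y - r:y + r + 1]
            out[x - r, y - r] = np.sum(template * neighborhood)
    return out
```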
The processing procedure of the frequency domain filtering is as follows: and converting the image from the space domain to the frequency domain, filtering the image in the frequency domain by using a filtering function, and finally inversely converting the result to the space domain. Wherein the frequency domain may be a space defined by a Fourier Transform (Fourier Transform) and a frequency variable (u, v).
In one implementation, the process of frequency domain filtering may be embodied by the following formula, for example:
G(u, v) = H(u, v) · F(u, v)
where F (u, v) represents the fourier transform of the original image F (x, y), H (u, v) represents the frequency domain filter function, and G (u, v) represents the fourier transform of the filtered image G (x, y).
It should be understood that the Fourier transform, the inverse Fourier transform, the FFT, and the IFFT mentioned here are merely examples for ease of understanding and should not constitute any limitation of the present application. The present application does not limit the specific manner in which images are converted between the spatial domain and the frequency domain.
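For illustration only, a sketch of the frequency-domain procedure described above (transform, multiply by the filter function, inverse transform) might look as follows; the NumPy FFT is used here merely as one possible way to convert between the spatial and frequency domains, and the names are assumptions.

```python
import numpy as np

def frequency_filter(image, H):
    """Sketch of G(u, v) = H(u, v) * F(u, v).

    `H` is a frequency-domain filter function of the same shape as the image.
    """
    F = np.fft.fft2(image)   # Fourier transform of f(x, y)
    G = H * F                # apply the frequency-domain filter function
    g = np.fft.ifft2(G)      # inverse transform back to the spatial domain
    return np.real(g)        # discard the numerically tiny imaginary part
```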
In the embodiment of the application, the processing of the image mainly comprises image blurring and image sharpening. The image blurring can be realized by spatial filtering, and the image sharpening can be realized by frequency-domain filtering.
2. Filter kernel (kernel) and filter radius: the filter kernel may also be referred to as a convolution kernel. The length and width of the filter kernel can be defined manually, and length × width is referred to as the size of the filter kernel; common sizes are 3 × 3, 5 × 5, 7 × 7, and so on. The size of the filter kernel is determined by the filter radius: the filter kernel takes the pixel to be processed as its center and extends by one filter radius upward, downward, leftward, and rightward from that pixel. In short, if the filter radius is r and the side length of the filter kernel is k, then k = 2r + 1. For example, if the filter radius is 1, the size of the filter kernel is 3 × 3; if the filter radius is 2, the size of the filter kernel is 5 × 5. For the sake of brevity, further examples are not enumerated here.
In the embodiment of the present application, the filter kernel may be understood as a numerical matrix composed of a plurality of filtering weights. When the filter kernel is used to process a certain pixel to be processed, each value in the matrix corresponds to the pixel to be processed or to a pixel at a particular position around it (i.e., a pixel in its neighborhood), so that a convolution (or multiplication) can be performed. As shown in the figures, a filter kernel comprises a plurality of points; for example, a filter kernel of size 5 × 5 comprises 25 points. Each point in the filter kernel has a value, so each point may be referred to as a value point. Each value point may correspond to a pixel and may be used to represent the filtering weight of the corresponding pixel.
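As an illustrative sketch assuming a mean filter, the relation k = 2r + 1 and the view of a filter kernel as a matrix of value points (filtering weights) can be expressed as follows; the helper name is hypothetical.

```python
import numpy as np

def mean_kernel(radius):
    """Build a (2r+1) x (2r+1) mean filter kernel: every value point carries
    the same filtering weight and the weights sum to 1."""
    k = 2 * radius + 1                  # kernel side length, k = 2r + 1
    return np.full((k, k), 1.0 / (k * k))

# e.g. radius 1 -> 3x3 kernel, radius 2 -> 5x5 kernel
print(mean_kernel(1).shape, mean_kernel(2).shape)   # (3, 3) (5, 5)
```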
3. Image blurring (blur): an image may be blurred in order to reduce image noise and reduce the level of detail. A blurred, smoothed image is obtained after the blurring processing; in addition, the image size is also reduced. Image blurring may be achieved, for example, by mean filtering, Gaussian filtering, or the like.
4. Image sharpening (sharpen): to compensate for the contours of an image, the edges and gray-level transition parts of the image are enhanced so that the image becomes clearer; this is image sharpening. Image sharpening highlights the edges, contours, or features of certain target elements, such as ground objects, in the image. It can be understood that image sharpening is the counterpart of image blurring, and the images obtained by the two kinds of processing have different visual effects.
In one implementation, high-frequency information of the image can be extracted through a high-pass filter and superimposed on the original image, so that the detail enhancement of the image is realized. In the field of images, edge detection operators are generally used for extracting edge information (i.e., details) of images, so as to improve image details. Common edge extraction operators include: sobel operator, Prewitt operator, Roberts operator, Laplacian operator, etc.
In another implementation, low-frequency information of the image can be extracted through a low-pass filter, and then the low-frequency information is subtracted from the original image to obtain high-frequency information, and the high-frequency information is superimposed on the original image to realize image enhancement.
In the embodiment of the present application, the second implementation manner described above is adopted for detail enhancement in the ROI region. It should be understood that this should not constitute any limitation to the present application.
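A rough sketch of the second implementation (low-pass the image, subtract to obtain high-frequency information, and superimpose it on the original) is given below; the mean filter, the padding choice, and the strength parameter are assumptions made only for illustration, not details taken from the patent.

```python
import numpy as np

def sharpen_roi(image, blur_radius=2, strength=1.0):
    """Low-pass the image with a mean filter, take the difference as
    high-frequency detail, and add it back onto the original image."""
    k = 2 * blur_radius + 1
    kernel = np.full((k, k), 1.0 / (k * k))
    # mean filtering with edge padding so the low-pass image keeps the same size
    padded = np.pad(image.astype(float), blur_radius, mode="edge")
    low = np.zeros(image.shape, dtype=float)
    for x in range(image.shape[0]):
        for y in range(image.shape[1]):
            low[x, y] = np.sum(kernel * padded[x:x + k, y:y + k])
    high = image - low                       # high-frequency information
    return np.clip(image + strength * high, 0, 255)
```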
5. Mean filtering: a linear filtering method. In mean filtering, a template is selected around the pixel to be processed, composed of the pixel to be processed and several neighboring pixels, and the mean value of all pixels within the template replaces the value of the pixel to be processed. This template may be referred to as a mean convolution template.
If mean filtering is expressed by a formula, the value of the pixel point (x, y) to be processed after mean filtering can be obtained by the following formula:
g(x, y) = (1/M) · Σ_{i=-r..r} Σ_{j=-r..r} f(x + i, y + j)
wherein M represents the number of numerical points in the filtering kernel, and M is a positive integer; when the filtering kernel is used for filtering the pixel point (x, y), the pixel point (x, y) is positioned at the center of the filtering kernel; (x + i, y + j) represents a pixel in the neighborhood of pixel (x, y), and may also be understood as each pixel in the filter kernel.
For ease of understanding, the mean filtering is briefly described below in conjunction with fig. 1.
The left diagram in fig. 1 shows an example of an image to be processed. The image to be processed includes 5 × 5 pixel points. The black area of the image to be processed corresponds to a 3 × 3 filter kernel. I.e. the filter radius of the filter kernel is 1. And averaging the pixel points in the black area to obtain the value of the pixel point in the black area at the upper left corner in the right image. In other words, the value of the pixel point at the upper left corner in the right image is replaced by the mean value of the 9 pixel points at the upper left corner in the left image, that is, the mean value of the 9 pixel points in the filter kernel.
Moving the filtering kernel pixel by pixel, for example, sequentially from left to right, from top to bottom, or sequentially from top to bottom and from left to right, the filtered image shown in the right diagram of fig. 1 can be obtained. It can be seen that the filtered image includes 3 × 3 pixel points. The value of each pixel in the filtered image is replaced by the mean value of 9 pixels in the corresponding region in the image to be processed.
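For illustration, the FIG. 1 example (a 5 × 5 image to be processed, a 3 × 3 mean filter kernel, and a 3 × 3 filtered image) could be reproduced with a sketch like the following; the sample pixel values are hypothetical.

```python
import numpy as np

def mean_filter(image, radius=1):
    """Replace each pixel by the mean of the (2r+1) x (2r+1) neighborhood
    centered on it; as in FIG. 1, a 5x5 input with radius 1 yields a 3x3 output."""
    k = 2 * radius + 1
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = image[x:x + k, y:y + k].mean()
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # hypothetical 5x5 image to be processed
print(mean_filter(image).shape)                    # (3, 3)
```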
6. Weight filtering: a linear filtering method. Unlike mean filtering, the weight of each value point within the filter kernel is configurable, so the weights of the value points may differ from one another.
If weight filtering is expressed by a formula, the weighted value of the pixel (x, y) to be processed can be obtained by the following formula:
g(x, y) = Σ_{i=-r..r} Σ_{j=-r..r} w(i, j) · f(x + i, y + j) / Σ_{i=-r..r} Σ_{j=-r..r} w(i, j)
wherein r and the meanings of the pixels (x, y) and (x + i, y + j) have already been explained in the description of mean filtering and are not repeated here for brevity; (i, j) represents each position in the filter kernel; w(i, j) represents the weight of the pixel (x + i, y + j) in the filter kernel, and w(i, j) > 0.
For ease of understanding, weight filtering is briefly described below in conjunction with FIG. 2, which shows an example of weight filtering.
FIG. 2 shows a filtered image obtained by weight-filtering an image to be processed, which includes 8 × 8 pixels, with a 3 × 3 filter kernel, i.e., with a filter radius of 1. The weight of each value point in the filter kernel is shown in the figure. After the 9 pixels in the black region at the upper left corner of the image to be processed are weight-filtered based on this filter kernel, the value of the pixel in the black region at the upper left corner of the right-hand image is obtained. Unlike mean filtering, the weights of the value points within the filter kernel may differ.
Moving the filter kernel pixel by pixel across the image to be processed, for example sequentially from left to right and then top to bottom, or sequentially from top to bottom and then left to right, yields the filtered image shown in the right-hand diagram of FIG. 2. The filtered image includes 6 × 6 pixels, and the value of each pixel in the filtered image is the weighted result of the 9 pixels in the corresponding region of the image to be processed.
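A sketch of weight filtering consistent with the description above is given below; the 3 × 3 kernel weights and the 8 × 8 sample image are hypothetical, and the normalization by the sum of the weights is an assumption made for illustration.

```python
import numpy as np

def weight_filter(image, weights):
    """Weight filtering: like mean filtering, but each value point in the
    kernel may carry a different (positive) weight; the result is the
    weighted average of the neighborhood."""
    weights = weights / weights.sum()              # normalize so the weights sum to 1
    k = weights.shape[0]
    h, w = image.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(weights * image[x:x + k, y:y + k])
    return out

# hypothetical 3x3 kernel whose center value point is weighted more heavily
kernel = np.array([[1., 1., 1.],
                   [1., 4., 1.],
                   [1., 1., 1.]])
image = np.arange(64, dtype=float).reshape(8, 8)   # 8x8 image as in FIG. 2
print(weight_filter(image, kernel).shape)          # (6, 6)
```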
7. Gaussian filtering: a linear filtering method. Gaussian filtering filters the image using the distribution of a two-dimensional Gaussian function. In the Gaussian filtering process, a Gaussian template is first determined and then convolved with the image. This Gaussian template may also be referred to as a Gaussian kernel, which can be understood as one kind of the filter kernels described above.
If Gaussian filtering is expressed by a formula, the weight of the value point (i, j) in the Gaussian kernel can be obtained by the following formula:
w(i, j) = (1 / (2πσ²)) · exp(−(i² + j²) / (2σ²))
wherein σ is the standard deviation of the Gaussian distribution, and its value can be preset. In the embodiment of the present application, the standard deviation σ is referred to as a filter parameter.
For ease of understanding, Gaussian filtering is briefly described below in conjunction with FIG. 3. The size of the Gaussian kernel shown in FIG. 3 is 3 × 3. Assuming that the coordinates of the center point are (0, 0), the coordinates of the 8 value points adjacent to it are (-1, 1), (0, 1), (1, 1), (-1, 0), (1, 0), (-1, -1), (0, -1), and (1, -1). To determine the weight of each value point in the Gaussian kernel, the value of σ needs to be set. Assuming that σ is 1.5, the weight of the center point and the weights of the neighboring value points in the Gaussian kernel with a filter radius of 1 can be obtained as shown in the figure. The weights of the value points are then normalized to obtain the weight of each value point in the Gaussian kernel.
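The Gaussian-kernel construction described above (evaluate the two-dimensional Gaussian at each value point, then normalize) can be sketched as follows, using the σ = 1.5, radius-1 example; the function name is chosen only for illustration.

```python
import numpy as np

def gaussian_kernel(radius=1, sigma=1.5):
    """Build the Gaussian template: evaluate the 2-D Gaussian at every value
    point (i, j) relative to the center, then normalize so the weights sum to 1."""
    coords = np.arange(-radius, radius + 1)
    i, j = np.meshgrid(coords, coords, indexing="ij")
    g = np.exp(-(i ** 2 + j ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g / g.sum()                  # normalization step described above

print(gaussian_kernel(1, 1.5))          # 3x3 Gaussian kernel with sigma = 1.5
```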
It should be understood that the values of the various filtering modes, filtering kernels and various parameters mentioned above in connection with the drawings are only examples, and should not limit the application in any way.
8. High frequency information and low frequency information: the areas in the image where the color changes slowly, i.e. the grey scale changes slowly, correspond to low frequencies. The main component of the image is low frequency information, which forms the basic grey scale of the image and has little decisive effect on the image structure. The grey values of the image edges change sharply, corresponding to high frequencies. The high frequency information forms the edges and details of the image. The mid-frequency information determines the basic structure of the image, forming the main edge structure of the image. Thus, the high frequency information may be understood as a further enhancement of the image content over the intermediate frequency information.
In the embodiment of the present application, in order to distinguish different kinds of information, the terms high-frequency information, low-frequency information, medium-frequency information, and medium-and-low-frequency information are introduced; the different names are only used to distinguish signals obtained through different processing. Specifically, if different filter radii are used to filter the image to be processed, the information obtained is different. For example, compared with mean filtering with a 7 × 7 filter kernel, mean filtering with a 5 × 5 filter kernel retains more detail; for the purpose of distinction, the information obtained by the former (5 × 5) filtering is referred to as medium-and-low-frequency information, and the information obtained by the latter (7 × 7) filtering is referred to as low-frequency information. The medium-and-low-frequency information can be understood as including low-frequency information and medium-frequency information: the difference between the medium-and-low-frequency information obtained after 5 × 5 mean filtering and the low-frequency information obtained after 7 × 7 mean filtering can be regarded as medium-frequency information, and the difference between the original image and the medium-and-low-frequency information can be regarded as high-frequency information.
The image processing method mainly comprises image blurring processing and image sharpening processing. The image processing method provided by the present application will be described in detail below with reference to the accompanying drawings. The method embodiment shown in fig. 4 mainly describes a process of blurring an image. The method embodiment shown in fig. 13 mainly describes the process of blurring and sharpening an image.
It should be understood that the method embodiments shown in fig. 4 and 13 may be performed by an image processing apparatus or a component (e.g., a chip, etc.) configured in the image processing apparatus. This is not a limitation of the present application. Hereinafter, for convenience of understanding and explanation, the embodiments of the present application will be described in detail with reference to an image processing apparatus as an execution subject.
The image processing apparatus may be configured in, for example, an unmanned device such as an unmanned aerial vehicle, an unmanned automobile, or the like. The drone may include one or more cameras that may be used to capture images. The captured image may be transmitted to a user end, such as a console, after being blurred by the image processing apparatus.
It should be understood that the application scenarios described above are only examples and should not constitute any limitation of the present application. The image processing method provided by the present application is not limited to the above application scenario.
The method 400 shown in fig. 4 and the method 500 shown in fig. 13 are described in detail below, respectively.
Fig. 4 is a schematic flowchart of an image processing method 400 provided in an embodiment of the present application. The method 400 shown in fig. 4 includes step 410 and step 420. The steps of method 400 are described in detail below.
In step 410, an image to be processed is acquired.
In particular, the image to be processed may be a digital image. For example, an image captured by a camera is converted into an electrical signal by a photosensitive element, and then the electrical signal is transmitted to an Image Signal Processor (ISP) for processing, so that the image is converted into a digital image. The digital image may further be sent to an image processing device. The digital image sent to the image processing device is the image to be processed.
It should be understood that the image to be processed described herein may be, for example, one frame of image acquired by a camera through photographing, or one frame of multiple frames of images acquired through video recording, which is not limited in this application.
In the embodiment of the present application, the image to be processed may include adjacent progressive blurring regions and non-progressive blurring regions. For example, the progressive blurring region is surrounded by the non-progressive blurring region; for another example, the progressive blurring region and the non-progressive blurring region are two adjacent stripe regions, and so on. This is not a limitation of the present application.
In one possible design, the image to be processed may include at least one blurring layer. The at least one blurring layer may be divided before the image to be processed is transmitted to the image processing apparatus, or may be divided by the image processing apparatus itself. This is not limited in the present application.
Optionally, the method further comprises: dividing the image to be processed into at least one blurring layer. The at least one blurring layer is arranged consecutively. For example, the at least one blurring layer may be arranged in order from inside to outside, with one blurring layer surrounding another. As another example, the at least one blurring layer may be arranged sequentially from top to bottom or from left to right, with one blurring layer next to another. This is not limited in the present application.
Further, each blurring layer may include a progressive blurring region and a non-progressive blurring region adjacent to each other. That is, each blurring layer can be further divided into two adjacent regions. Based on the difference in the filtering operations applied to the two regions, they are referred to as the progressive blurring region and the non-progressive blurring region, respectively. The detailed process of the filtering operation will be described later and is not detailed here.
In step 420, the progressive blurring region is filtered based on the first filter kernel, and the non-progressive blurring region is filtered based on the second filter kernel.
For convenience of distinction and explanation, in the embodiment of the present application, a filter kernel corresponding to a progressive blurring region is denoted as a first filter kernel, and a filter kernel corresponding to a non-progressive blurring region is denoted as a second filter kernel. The first filter kernel and the second filter kernel may be the same size or different sizes. Or, the filtering radius corresponding to the progressive blurring region and the filtering radius corresponding to the non-progressive blurring region may be the same or different. This is not a limitation of the present application.
Each of the first and second filter kernels may include an inner value point and an outer value point, and the outer value point in each filter kernel surrounds the inner value point; the filtering weight of the internal value points in the first filtering kernel is greater than that of the internal value points in the second filtering kernel, and the internal value points in the first filtering kernel and the internal value points in the second filtering kernel are the same in number and position.
In the embodiment of the present application, the first filter kernel or the second filter kernel may be understood as a numerical matrix composed of a plurality of filter weight values. When the filter kernel is used to process a certain pixel to be processed, each value point in the value matrix corresponds to the pixel to be processed and pixels at different positions around the pixel (i.e., pixels in the neighborhood) respectively, so as to perform convolution (or multiplication).
In step 420, the image processing apparatus may perform filtering processing on each pixel to be processed according to the filtering weight of each numerical value point in the filtering kernel corresponding to the region to which each pixel to be processed belongs. Therefore, the image processing apparatus can determine the filter weight of each numerical point in the filter kernel corresponding to each region in advance.
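Purely as an illustrative sketch of step 420, the per-region dispatch might look as follows, assuming a region map that marks, for each pixel, whether it belongs to the progressive blurring region, the non-progressive blurring region, or neither; this map and the edge-padding choice are assumptions, not details from the patent.

```python
import numpy as np

def filter_with_region_kernels(image, region_map, first_kernel, second_kernel):
    """Filter each pixel with the kernel of the region it belongs to
    (region_map == 1: progressive blurring region, uses first_kernel;
     region_map == 2: non-progressive blurring region, uses second_kernel;
     any other value, e.g. the ROI: keep the original pixel value)."""
    out = image.astype(float)
    r1 = first_kernel.shape[0] // 2
    r2 = second_kernel.shape[0] // 2
    pad = max(r1, r2)
    padded = np.pad(image.astype(float), pad, mode="edge")
    for x in range(image.shape[0]):
        for y in range(image.shape[1]):
            if region_map[x, y] == 1:
                k, r = first_kernel, r1
            elif region_map[x, y] == 2:
                k, r = second_kernel, r2
            else:
                continue
            block = padded[x + pad - r:x + pad + r + 1,
                           y + pad - r:y + pad + r + 1]
            out[x, y] = np.sum(k * block)
    return out
```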
In the embodiment of the present application, the value point in each of the first filter kernel and the second filter kernel may be divided into two parts: an inner value point and an outer value point. In other words, each filter kernel may include an inner value point and an outer value point. The outer value points in each filter kernel surround the inner value points. Fig. 5 shows an example of a filter kernel. The filter kernel shown in fig. 5 is a filter kernel with a filter radius of 2. That is, the filter kernel is a 5 × 5 filter kernel. The middle 9 numerical value points shown by dark shading in the filtering kernel are internal numerical value points, and the outer 16 numerical value points shown by light shading in the outer ring of the filtering kernel are external numerical value points. It should be understood that fig. 5 is merely an example for ease of understanding and should not constitute any limitation on the present application. The method and the device have no limitation on the size of the filtering radius of the filtering kernel and the number of the external numerical points and the internal numerical points.
In the embodiment of the present application, the number of the internal value points in the first filtering kernel and the number of the internal value points in the second filtering kernel are the same, and the positions of the internal value points are the same.
Here, the positions of the internal value points in the first filter kernel and the internal value points in the second filter kernel are the same, and specifically, the directions and distances of the internal value points in the first filter kernel with respect to a reference may be the same as the directions and distances of the internal value points in the second filter kernel with respect to the same reference.
For example, the reference may be the center of the filter kernel. The center of the filter kernel may particularly refer to a numerical point located at the center of the filter kernel. For the sake of distinction and explanation, a value point located at the center of the first filter kernel is referred to as a first value point, and a value point located at the center of the second filter kernel is referred to as a second value point. The distance between the internal value point in the first filter kernel and the first value point is the same as the distance between the internal value point in the second filter kernel and the second value point, and the direction of the internal value point in the first filter kernel relative to the first value point is the same as the direction of the internal value point in the second filter kernel relative to the second value point.
Alternatively, the reference may be taken from among the internal value points themselves, for example, the value point located at the center of the internal value points of the first filter kernel and the value point located at the center of the internal value points of the second filter kernel. For the sake of distinction and explanation, the value point located at the center of the internal value points of the first filter kernel is referred to as a third value point, and the value point located at the center of the internal value points of the second filter kernel is referred to as a fourth value point. The distance between the third value point and the first value point in the first filter kernel is the same as the distance between the fourth value point and the second value point in the second filter kernel, and the direction of the third value point relative to the first value point is the same as the direction of the fourth value point relative to the second value point.
It should be understood that the above description, which uses the value point located at the center of the internal value points in the first filter kernel and the value point located at the center of the internal value points in the second filter kernel as references, is given only for ease of understanding the positional relationship between the internal value points and their respective filter kernels. It should not be construed as limiting the application in any way. For example, a line connecting any one or more internal value points may also be used as a reference. For the sake of brevity, this is not illustrated individually.
In short, if the centers of the first filter kernel and the second filter kernel are overlapped, the internal numerical point in the first filter kernel and the internal numerical point in the second filter kernel may constitute a completely overlapped region.
For ease of understanding, the internal value points in the first filter kernel and the internal value points in the second filter kernel are described below with reference to the drawings. Fig. 6 is a schematic diagram of a first filtering kernel and a second filtering kernel provided in an embodiment of the present application. As shown, a) in fig. 6 shows a first filter kernel and a second filter kernel, and shows a first numerical value point located at the center of the first filter kernel and a second numerical value point located at the center of the second filter kernel, respectively. B) and c) in fig. 6 show two examples of internal value points in the first filter kernel (indicated by hatching in the figure for ease of distinction) and internal value points in the second filter kernel (indicated by hatching in the figure for ease of distinction), respectively. It can be seen that the two shaded areas contain the same number of value points, and the distance and direction of the third value point in the first filter kernel relative to the first value point are the same as the distance and direction of the fourth value point in the second filter kernel relative to the second value point. As shown in b), the third value point in the first filter kernel is located to the left of and adjacent to the first value point, and the fourth value point in the second filter kernel is located to the left of and adjacent to the second value point. As shown in c), the third value point in the first filter kernel coincides with the first value point, and the fourth value point in the second filter kernel coincides with the second value point.
It should be understood that the figures are for illustration purposes only and that two filter kernels of different sizes are shown. This should not be construed as limiting the application in any way. The first filter kernel and the second filter kernel may be filter kernels having the same size, or the first filter kernel may be larger in size than the second filter kernel, which is not limited in this application.
In the embodiment of the present application, the filtering weight of the internal value point in the first filtering kernel is greater than the filtering weight of the internal value point in the second filtering kernel.
Here, the filtering weight of the internal numerical point in the first filtering kernel may specifically be a sum of filtering weights of the internal numerical points when the progressive blurring region of the image to be processed is filtered based on the first filtering kernel. Correspondingly, the filtering weight of the internal numerical point in the second filtering kernel may specifically be a sum of filtering weights of the internal numerical points when the non-progressive blurring region of the image to be processed is filtered based on the second filtering kernel.
As an example, in the same blurring layer, the filtering weight of the internal value points in the first filter kernel is w1 and the filtering weight of the internal value points in the second filter kernel is w2, where w1 = (1 + w2)/2, and w1 and w2 are both positive numbers.
It should be understood that the magnitudes and relationships of the filter weights of the internal value points in the first filter kernel and the filter weights of the internal value points in the second filter kernel are only examples, and should not limit the present application in any way. A mathematical transformation or equivalent substitution of the relation of the filter weights of the internal value points of the first filter kernel and the filter weights of the internal value points of the second filter kernel can be made by those skilled in the art based on the same concept. It is within the scope of the present application to obtain the relationship between the filter weights of the internal value points of the first filter kernel and the filter weights of the internal value points of the second filter kernel by the mathematical transformation or equivalent replacement, so long as the relationship between the filter weights of the internal value points of the first filter kernel and the filter weights of the internal value points of the second filter kernel satisfies that the filter weights of the internal value points of the first filter kernel are greater than the filter weights of the internal value points of the second filter kernel.
Optionally, in the same blurring layer, the filtering radius of the first filtering kernel is the same as the filtering radius of the second filtering kernel.
That is, the non-progressive blurring region and the progressive blurring region in the same blurring layer may perform the filtering operation based on the same filtering radius. In other words, each blurring level may correspond to a filter radius. Or, the size of the filter kernel of the non-progressive blurring region in the same blurring layer is the same as the size of the filter kernel of the progressive blurring region.
Optionally, in the same blurring hierarchy, the center of the internal value point in the first filtering kernel coincides with the center of the first filtering kernel, and the center of the internal value point in the second filtering kernel coincides with the center of the second filtering kernel.
In other words, the first value point in the first filter kernel coincides with the third value point, and the second value point in the second filter kernel coincides with the fourth value point. I.e. as shown in c) in fig. 6.
The following is described with reference to a specific example. It is assumed that, in the same blurring layer, the progressive blurring region corresponds to a filter kernel of size 5 × 5 and the non-progressive blurring region also corresponds to a filter kernel of size 5 × 5. That is, the first filter kernel and the second filter kernel each include 25 value points, and the filter radius of each is 2. The internal value points in the first filter kernel may, for example, occupy the 3 × 3 region at the center of the first filter kernel, i.e., the number of internal value points is 9. Correspondingly, the internal value points in the second filter kernel may occupy the 3 × 3 region at the center of the second filter kernel, and their number is also 9.
If the non-progressive blurring region is subjected to mean filtering, the filtering weight of each value point in the second filter kernel is the same. For example, in a 5 × 5 filter kernel, the filtering weight of each value point is 4%, so the filtering weight w2 of the internal value points in the second filter kernel is 36%. If w1 = (1 + w2)/2 is used to determine the filtering weight w1 of the internal value points in the first filter kernel, then w1 is (1 + 36%)/2 = 68%, and the filtering weight of the external value points in the first filter kernel is only 1 − 68% = 32%.
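The worked example above can be sketched as follows; the construction of the two kernels (a 5 × 5 mean kernel for the non-progressive blurring region, and a 5 × 5 kernel whose inner 3 × 3 value points carry w1 = (1 + w2)/2 of the total weight for the progressive blurring region) is one possible reading of the example, with the weight spread evenly inside each part assumed only for illustration.

```python
import numpy as np

def progressive_kernels(radius=2, inner_radius=1):
    """Second kernel: plain 5x5 mean kernel, so its inner 3x3 value points
    carry w2 = 9 * 4% = 36% in total.
    First kernel: inner value points carry w1 = (1 + w2) / 2 = 68% in total,
    and the remaining 32% is spread over the 16 external value points."""
    k = 2 * radius + 1
    second = np.full((k, k), 1.0 / (k * k))        # mean filtering: 4% per value point

    inner = slice(radius - inner_radius, radius + inner_radius + 1)
    inner_count = (2 * inner_radius + 1) ** 2      # 9 internal value points
    outer_count = k * k - inner_count              # 16 external value points

    w2 = second[inner, inner].sum()                # 0.36
    w1 = (1 + w2) / 2                              # 0.68

    first = np.full((k, k), (1 - w1) / outer_count)
    first[inner, inner] = w1 / inner_count
    return first, second

first, second = progressive_kernels()
print(first[1:4, 1:4].sum(), second[1:4, 1:4].sum())   # ~0.68 ~0.36
```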
It can be seen that the filtering weight of the internal value points of the first filter kernel is higher than that of the external value points; that is, the higher weight is placed on the internal value points in the first filter kernel.
In one possible design, the number of internal value points in the first filter kernel may be the same as the number of value points in the filter kernel of the non-progressive blurring region in the previous blurring layer. This is as if external value points were added on the basis of the filter kernel of the previous blurring layer and their weights were increased slowly: as in the present embodiment, a lower weight is applied to them in the progressive blurring region and a slightly higher weight in the non-progressive blurring region, so that a smooth transition from the previous blurring layer to the next blurring layer is possible.
For example, the number of internal value points in the filter kernel in the progressive blurring region in the 2 nd blurring layer is the same as the number of value points in the filter kernel in the non-progressive blurring region in the 1 st blurring layer; the number of the internal numerical points in the filter kernel in the progressive blurring region in the 3 rd blurring layer is the same as the number of the numerical points in the filter kernel in the non-progressive blurring region in the 2 nd blurring layer; the number of the internal value points in the filter kernel in the progressive blurring region in the 4 th blurring layer is the same as the number of the value points in the filter kernel in the non-progressive blurring region in the 3 rd blurring layer.
FIG. 7 shows progressive and non-progressive blurring regions in different blurring layers. As shown in the figure, the region occupied by the 9 internal value points in the filter kernel of the progressive blurring region in the latter blurring layer is 3 × 3 in size, exactly the same as the size of the filter kernel of the non-progressive blurring region in the previous blurring layer. The 9 internal value points may be given a higher total filtering weight, for example the 68% listed above, illustrated in the figure by dark shading. The external value points in the filter kernel of the progressive blurring region in the latter blurring layer may be given a lower total filtering weight, for example the 32% listed above, illustrated in the figure by light shading. That is, the filtering processing in the progressive blurring region may be performed in a weighted-filtering manner.
It can be understood that the region shown by light shading in the figure is the part of the filter kernel in the latter blurring layer that is newly added relative to the filter kernel in the previous blurring layer. This newly added region is the region occupied by the external value points in the latter blurring layer as described above. The value points in the newly added region can first be given a lower filtering weight so as to transition slowly to the non-progressive blurring region in the latter blurring layer.
The filtering weights of the value points in the filter kernel of the non-progressive blurring region in the latter blurring layer may all be the same. That is, the filtering processing in the non-progressive blurring region may be performed by means of mean filtering. For example, the total filtering weight of the internal value points may be the 36% listed above, and the total filtering weight of the external value points may be 1 − 36% = 64%; that is, the filtering weight of each value point is 4%.
Therefore, when filtering the pixels to be processed in the latter blurring layer, a higher weight can first be applied to the internal value points in the progressive blurring region and a lower weight to the external value points, so that the degree of blurring increases with the filter radius during the transition from the filter kernel of the previous blurring layer to the filter kernel of the latter blurring layer; meanwhile, as the filter kernel grows during this transition, the filtering weight of the newly added external value points in the filter kernel can be increased slowly, achieving a smooth transition.
It should be understood that FIG. 7 is merely an example for ease of understanding and should not constitute any limitation on the present application. The present application does not limit the internal value points, the external value points, or the corresponding filtering weights in the first filter kernel.
As previously mentioned, the image to be processed may include at least one blurring layer. When there are multiple blurring layers, a progressive blurring region is provided between every two non-progressive blurring regions. That is, progressive and non-progressive blurring regions are arranged alternately, in an "ABAB" manner.
As such, starting from the 2nd blurring layer, each progressive blurring region can be a region located between the non-progressive blurring region in the blurring layer to which it belongs and the non-progressive blurring region in the previous blurring layer.
For example, the progressive blurring region in the 2nd blurring layer may be the region between the non-progressive blurring region in the 2nd blurring layer and the non-progressive blurring region in the 1st blurring layer; the progressive blurring region in the 3rd blurring layer may be the region between the non-progressive blurring region in the 3rd blurring layer and the non-progressive blurring region in the 2nd blurring layer; and the progressive blurring region in the 4th blurring layer may be the region between the non-progressive blurring region in the 4th blurring layer and the non-progressive blurring region in the 3rd blurring layer. The non-progressive blurring region in the 1st blurring layer may be the region between the progressive blurring region in the 1st blurring layer and the progressive blurring region in the 2nd blurring layer.
FIG. 8 shows an example of at least one blurring layer in an image to be processed. As shown in a) of FIG. 8, 4 blurring layers are arranged in sequence from inside to outside. Optionally, the image to be processed comprises an ROI. Because the ROI is the region of interest of the user, the pixels in the ROI may not be blurred; for example, the original values of the pixels may be maintained, or sharpening may be performed. The above 4 blurring layers may be arranged sequentially from inside to outside with the ROI as the center, and the filter radius is gradually increased from inside to outside to filter the pixels in each blurring layer.
To illustrate the blurring layers and regions more clearly, b) in FIG. 8 distinguishes different blurring layers by solid lines and distinguishes non-progressive and progressive blurring regions in the same blurring layer by dashed lines, where the non-progressive blurring regions are unshaded and the progressive blurring regions are shaded.
It can be seen that there is a non-progressive blurring region between every two progressive blurring regions. With the ROI as the center, the order from inside to outside is: ROI, progressive blurring region in the 1st blurring layer, non-progressive blurring region in the 1st blurring layer, progressive blurring region in the 2nd blurring layer, non-progressive blurring region in the 2nd blurring layer, progressive blurring region in the 3rd blurring layer, non-progressive blurring region in the 3rd blurring layer, progressive blurring region in the 4th blurring layer, non-progressive blurring region in the 4th blurring layer. In other words, the region immediately adjacent to the ROI is the progressive blurring region in the 1st blurring layer. The progressive blurring region in the 1st blurring layer surrounds the ROI, and the non-progressive blurring region in the 1st blurring layer surrounds the progressive blurring region in the 1st blurring layer; the progressive blurring region in the 2nd blurring layer surrounds the non-progressive blurring region in the 1st blurring layer, and the non-progressive blurring region in the 2nd blurring layer surrounds the progressive blurring region in the 2nd blurring layer; the progressive blurring region in the 3rd blurring layer surrounds the non-progressive blurring region in the 2nd blurring layer, and the non-progressive blurring region in the 3rd blurring layer surrounds the progressive blurring region in the 3rd blurring layer; the progressive blurring region in the 4th blurring layer surrounds the non-progressive blurring region in the 3rd blurring layer, and the non-progressive blurring region in the 4th blurring layer surrounds the progressive blurring region in the 4th blurring layer.
The boundary of the ROI and the boundaries of the blurring layers shown in FIG. 8 are all lines perpendicular to the horizontal frame of the image to be processed. Since the human eye is sensitive to vertical lines, the boundary of the ROI and the boundaries of the blurring layers can be further processed. For example, curvature may be added to the boundary of the ROI and the boundaries of the blurring layers. Optionally, the boundary of the ROI includes a line that is not parallel and/or perpendicular to the horizontal boundary of the image to be processed. Optionally, the boundary of each blurring layer includes a line that is not parallel and/or perpendicular to the horizontal boundary of the image to be processed.
FIG. 9 is a schematic diagram of at least one blurring layer and an ROI in an image to be processed according to an embodiment of the present application. FIG. 9 shows an example of processing two boundaries of the ROI and two boundaries of each blurring layer. As shown in the figure, the ROI includes two lines perpendicular to the horizontal boundary of the image to be processed, i.e., the two vertical boundaries of the ROI. Adding curvature to these two vertical boundaries, for example by using a quadratic curve, can produce the approximately elliptical ROI shown in FIG. 9. The boundary of each blurring layer also includes two lines perpendicular to the horizontal boundary of the image to be processed, i.e., the vertical boundaries of each blurring layer. Adding curvature to the vertical boundaries of each blurring layer likewise produces the approximately elliptical blurring layers shown in FIG. 9.
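As a hedged sketch of the idea of adding curvature with a quadratic curve, one possible parameterization of a vertical boundary is shown below; the bulge amount and the specific quadratic form are assumptions made for illustration and are not specified by the patent.

```python
import numpy as np

def curved_vertical_boundary(y, y_top, y_bottom, x_edge, bulge):
    """Bulge a vertical boundary outward with a quadratic curve: the boundary's
    x position is offset by `bulge` pixels at the vertical midpoint and returns
    to x_edge at the top and bottom (all parameters are hypothetical)."""
    t = (y - y_top) / (y_bottom - y_top)           # 0 at the top, 1 at the bottom
    return x_edge + bulge * 4 * t * (1 - t)        # quadratic curve, maximal at t = 0.5

ys = np.linspace(0, 100, 5)
print(curved_vertical_boundary(ys, 0, 100, 200, 20))   # 200 at the ends, 220 in the middle
```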
Of course, curvature may also be added to the two horizontal boundaries of the ROI and the two horizontal boundaries of each blurring layer. Alternatively, curvature may be added to both the horizontal and vertical boundaries of the ROI and of each blurring layer. This is not limited in the present application. For the sake of brevity, this is not illustrated in a separate figure here.
Therefore, by adding curvature to the vertical and/or horizontal boundaries, the elimination of the boundaries between blurring layers and between the ROI and the blurring layers can be further enhanced, so that the transitions are smoother and more natural, further improving the visual effect.
It should be understood that the above description of an image to be processed that includes one ROI and 4 blurring layers is given in conjunction with the drawings only for ease of understanding. In fact, the present application does not limit the number of ROIs or the number of blurring layers. The image to be processed may include at least one ROI and at least one blurring layer, and the number of ROIs need not bear any fixed (for example, one-to-one or multiple) relationship to the number of progressive blurring regions. The number and positions of the ROIs and progressive blurring regions depend on the region division of the image to be processed, which is not limited in the present application.
For example, the at least one ROI may be at least one region in an upper left corner, an upper right corner, a lower left corner, or a lower right corner of the image to be processed. Each ROI may be adjacent to at least one progressive blurring region. For example, each ROI region may be surrounded by at least one progressive blurring region. The at least one progressive blurring region surrounding each ROI may also be a single progressive blurring region in the middle region of the image to be processed, or the at least one progressive blurring region surrounding each ROI may be independent of each other. This is not a limitation of the present application.
For another example, the image to be processed may be a long frame, and the at least one ROI may be, for example, one or more segments in the middle of the image to be processed. Both sides of each ROI may be adjacent to at least one progressive blurring region.
Also for example, the image to be processed includes an ROI, which is located in a middle region of the image to be processed. The ROI may be adjacent to, e.g., surrounded by, at least one progressive blurring region.
Further, the non-progressive blurring regions in different blurring levels may correspond to different filtering radii. For example, the filter radius of each non-progressive blurring region may increase as the blurring level increases. For example, the image to be processed may be divided into 4 blurring levels as shown in fig. 8 and 9. The filtering radius of the non-progressive blurring region in the 1 st blurring layer is 1, the filtering radius of the non-progressive blurring region in the 2 nd blurring layer is 2, the filtering radius of the non-progressive blurring region in the 3 rd blurring layer is 3, and the filtering radius of the non-progressive blurring region in the 4 th blurring layer is 4. As another example, the image to be processed may be divided into 3 levels of blurring. The filtering radius of the non-progressive blurring region in the 1 st blurring layer is 1, the filtering radius of the non-progressive blurring region in the 2 nd blurring layer is 3, and the filtering radius of the non-progressive blurring region in the 3 rd blurring layer is 5. And so on, this is not to be enumerated here.
Optionally, in the at least one blurring layer, the filtering radii corresponding to the non-progressive blurring regions are consecutive positive integers.
Blurring of the image becomes more and more severe as the filtering radius increases; therefore, as the blurring level increases, blurring of the image also becomes more and more severe. If the filtering radii corresponding to two adjacent blurring layers are consecutive positive integers, the transition between the two blurring layers is smooth and natural.
Based on the above technical scheme, a progressive blurring region is added between two adjacent non-progressive blurring regions, with a larger filtering weight applied to one part of the pixel points in the progressive blurring region and a smaller weight to the other part, so that the two non-progressive blurring regions transition slowly through the progressive blurring region, the filtering boundary becomes less distinct, the bad visual experience caused by the filtering boundary between blurring layers is relieved, and a better visual effect is obtained. Correspondingly, if no progressive blurring region were allocated to each blurring layer, the regions in each blurring layer would all be non-progressive blurring regions. As described above, the filtering radii of the blurring layers are different. Then, after the non-progressive blurring regions in different blurring levels are filtered based on different filtering radii, relatively obvious filtering boundaries exist between the non-progressive blurring regions, and the visual effect is poor.
In addition, as the blurring level increases outward from the ROI, the filtering radius increases, and the blurring effect on the image becomes more and more obvious. Thereby, a blurring effect that transitions smoothly from the ROI outward can be achieved.
It should be understood that the manner of blurring the image to be processed is not limited to the weight filtering based on a mean filtering kernel described above. Several other possible blurring processes are listed below.
In another implementation, the blurring processing for the progressive blurring region may also be determined according to the position of each pixel point to be processed in the progressive blurring region. Since the filtering weight is determined according to the position of each pixel point to be processed, the progressive blurring region can be blurred more finely than the blurring process described above.
For example, in the filtering kernel corresponding to the progressive blurring region, the filtering weight of each value point is related to the distance between the pixel point to be processed and the inner boundary of the progressive blurring region. Here, the inner boundary of the progressive blurring region is closer to the non-progressive blurring region in the previous blurring layer than the outer boundary. The inner boundary of the progressive blurring region may be, for example, an edge that intersects a non-progressive blurring region in a previous blurring level.
Specifically, when the pixel point to be processed in the progressive blurring region is filtered by using the filter kernel corresponding to the progressive blurring region, the weight of each numerical value point in the filter kernel changes with the change of the distance between the pixel point to be processed and the inner boundary of the progressive blurring region.
The larger the filtering radius, the stronger the blurring effect, i.e. the more blurred the filtered image. When filtering the progressive blurring region, the pixel points to be processed that are close to its inner boundary may retain the detail information of the image to a greater extent, while the pixel points to be processed that are far from the inner boundary (or close to the outer boundary) may be blurred to a greater extent. In order to obtain the above effect, the filtering weights of the inner value points and the outer value points in the first filter kernel corresponding to the progressive blurring region may be designed accordingly. The inner value points and the outer value points have already been described in detail with reference to fig. 5 and are not repeated here for brevity.
In this implementation manner, the to-be-processed pixel points in the non-progressive blurring region in the to-be-processed image can still be blurred in a mean filtering manner, and the to-be-processed pixel points in the progressive blurring region can be blurred in a weight filtering manner.
When the first filter kernel is used to filter a certain pixel point in the progressive blurring region, the filtering weight of the outer value points in the first filter kernel is positively correlated with the distance between that pixel point and the inner boundary of the progressive blurring region. For ease of distinction and explanation, a pixel point to be processed in the progressive blurring region is referred to as a first pixel point; the first pixel point may be any pixel point to be processed in the progressive blurring region. In other words, the filtering weight of the outer value points increases as the distance between the first pixel point and the inner boundary of the progressive blurring region increases. Put differently, the closer the first pixel point is to the inner boundary of the progressive blurring region, the smaller the filtering weight of the outer value points; the closer the first pixel point is to the outer boundary of the progressive blurring region, the larger the filtering weight of the outer value points. Therefore, as the first pixel point moves farther from the inner boundary of the progressive blurring region, or closer to its outer boundary, the filtering weights of the inner value points and the outer value points tend toward an even distribution.
Illustratively, the filtering weight of the outer value points may be determined by

w_o = (D_blur / B_blur) × W,

where D_blur represents the distance between the first pixel point and the inner boundary of the progressive blurring region, B_blur represents the width of the progressive blurring region, and W represents the predefined filtering weight.
Fig. 10 shows an example of B_blur and D_blur. The black rectangle in the figure represents the first pixel point. It should be understood that the figure is merely an example in which the first pixel point is shown as a rectangle; in fact, the size of a pixel point is very small and can be ignored. Note that the progressive blurring region shown in fig. 10 may be the progressive blurring region in any one of the at least one blurring layer, and therefore the progressive blurring region may surround a previous blurring layer or may surround the ROI.
Optionally, the image to be processed further includes at least one ROI, each ROI is surrounded by at least one blurring layer, and the at least one blurring layer surrounding one ROI is distributed sequentially from inside to outside. Within each blurring layer, the non-progressive blurring region surrounds the progressive blurring region.
As shown in fig. 10, the distance between the inner boundary and the outer boundary of the progressive blurring region is the width B_blur of the progressive blurring region. It should be understood that the widths of the progressive blurring region may be different or the same in different directions (such as the x direction and the y direction in the figure), which is not limited in the present application. It should be noted, however, that if the widths of the progressive blurring region differ in different directions, the distance D_blur on which the filtering weight of the outer value points is based may also differ, depending on the position of the pixel point to be processed in the progressive blurring region.
As can be seen from the above equation, the filtering weight of the outer value point is proportional to the distance between the first pixel point and the inner boundary of the progressive blurring region.
Given the relation between the filtering weight of the outer value points in the first filter kernel and the distance between the first pixel point and the inner boundary of the progressive blurring region, the filtered value of the first pixel point can be determined by at least the following parameters:
1) when the first filter kernel is used for filtering the first pixel point, the value x_o of each pixel point in the progressive blurring region that coincides with an outer value point in the first filter kernel, and the value x_i of each pixel point in the progressive blurring region that coincides with an inner value point in the first filter kernel;
2) the number N_o of outer value points and the number N_i of inner value points in the first filter kernel;
3) the distance D_blur between the first pixel point and the inner boundary of the progressive blurring region; and
4) a predefined filtering weight.
For ease of understanding, reference is made to the accompanying drawings. The filter kernel shown in fig. 5 is assumed to be the first filter kernel. When the first filter kernel is used for filtering the first pixel point, the number of pixel points in the progressive blurring region that coincide with the outer value points of the first filter kernel is 16, i.e., N_o is 16; further, the value x_o of each of these 16 pixel points can be obtained. Adding the values of the 16 pixel points gives the sum Σx_o of the values of the pixel points in the progressive blurring region that coincide with the outer value points of the first filter kernel. Similarly, the number of pixel points in the progressive blurring region that coincide with the inner value points of the first filter kernel is 9, i.e., N_i is 9; further, the value x_i of each of these 9 pixel points can be obtained. Adding the values of the 9 pixel points gives the sum Σx_i of the values of the pixel points in the progressive blurring region that coincide with the inner value points of the first filter kernel.
Further, the filtered value x of the first pixel point can be determined by the formula

x = (Σx_i + (D_blur / B_blur) × W × Σx_o) / (N_i + (D_blur / B_blur) × W × N_o).

The definitions of the various parameters have been described in detail above in connection with the accompanying drawings and are not repeated here for brevity.
Applying a larger weight to the inner value points is similar to filtering the pixel point to be processed with a filter kernel smaller than the first filter kernel. The smaller the filtering radius, the less noticeable the blurring effect on the image. Therefore, if a larger filtering weight is applied to the inner value points, the detail information of the image is retained to a greater extent. Conversely, as the filtering weight of the outer value points increases, the blurring effect on the image becomes more and more obvious.
According to the embodiment of the application, the weight applied to the external numerical point is increased along with the increase of the distance between the pixel point to be processed and the inner boundary of the progressive blurring region, namely, the region far away from the inner boundary of the progressive blurring region or the position close to the non-progressive blurring region, so that a good smoothing effect is realized, and the image blurring is obvious; in the area close to the inner boundary of the progressive blurring area, or in the position far away from the non-progressive blurring area, the blurring degree of the image is smaller, and the detail information of the image is retained to a greater degree. Thereby a smooth transition is achieved.
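To make the computation concrete, the following is a minimal sketch of one way to evaluate the formula above for a single pixel of the progressive blurring region, assuming the inner value points keep a unit weight while the outer value points are scaled by (D_blur / B_blur) × W; the function and parameter names are illustrative and boundary handling is ignored.

```python
import numpy as np

def filter_progressive_pixel(img, y, x, r_small, r_large, d_blur, b_blur, w=1.0):
    """Weight-filter one pixel of a progressive blurring region.

    The 'inner' value points of the first filter kernel form the central
    (2*r_small+1)^2 block; the 'outer' value points are the surrounding ring
    completing a (2*r_large+1)^2 kernel and get weight (d_blur/b_blur)*w.
    """
    w_outer = (d_blur / b_blur) * w

    patch_large = img[y - r_large:y + r_large + 1, x - r_large:x + r_large + 1].astype(np.float64)
    patch_small = img[y - r_small:y + r_small + 1, x - r_small:x + r_small + 1].astype(np.float64)

    n_inner = patch_small.size
    n_outer = patch_large.size - n_inner
    sum_inner = patch_small.sum()
    sum_outer = patch_large.sum() - sum_inner

    # weighted mean: equals the small-kernel mean when d_blur -> 0 and
    # the large-kernel mean when d_blur -> b_blur (with w = 1)
    return (sum_inner + w_outer * sum_outer) / (n_inner + w_outer * n_outer)
```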
It should be understood that the above formula is only one possible implementation and should not be construed as limiting the present application in any way. For example, the filtering weight of the outer value points can also be determined by

w_o = α × (D_blur / B_blur) × W,

where α is a coefficient and α > 0.
It should be further understood that the specific process of blurring the to-be-processed pixel points in the non-progressive blurring region by means of the mean filtering is similar to that described above, and is not described herein again. It should also be understood that the above descriptions regarding the blurring of the non-progressive blurring region, the blurring hierarchy and the corresponding filter kernel can be used in conjunction with the present embodiment. The above blurring approach for progressive blurring regions may also be used in conjunction with the present embodiment. For example, a finer blurring mode is used in a blurring hierarchy close to the ROI, and the filtering weight of each numerical point in the first filtering kernel is changed along with the position of the pixel point to be processed; in the blurring layer far from the ROI, a coarser blurring manner is used, and the filtering weight of each value point in the first filtering kernel may be fixed.
In yet another implementation, the blurring process may be performed by using gaussian filtering.
In gaussian filtering, the filter parameter σ represents the degree of dispersion of the data. The smaller σ, the larger the central coefficient of the generated gaussian template, and the smaller the surrounding coefficients. In other words, the smoothing effect on the image is not very noticeable. The larger σ is, the smaller the central coefficient of the generated gaussian template is, and the larger the surrounding coefficients are. In other words, the smoothing effect on the image is more obvious and closer to that of the mean filtering.
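The effect of σ on the Gaussian template can be seen by constructing the template directly; the short sketch below (standard Gaussian kernel generation, not code defined by this application) prints the centre coefficient for a small and a large σ.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Generate a normalized 2-D Gaussian template of size (2*radius+1)^2."""
    ax = np.arange(-radius, radius + 1, dtype=np.float64)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

# smaller sigma -> larger centre coefficient, weaker smoothing
print(gaussian_kernel(2, 0.5)[2, 2])   # centre weight around 0.6
# larger sigma -> flatter template, closer to mean filtering
print(gaussian_kernel(2, 5.0)[2, 2])   # centre weight close to 1/25
```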
Therefore, different values of the filter parameter σ can be adopted for the progressive blurring region and the non-progressive blurring region in the same blurring layer. Specifically, for progressive blurring regions in the same blurring layer, a filter parameter σ with a smaller value can be adopted to apply a higher filter weight to part of numerical points in a filter kernel, and to apply a lower filter weight to newly added numerical points in the filter kernel relative to a filter kernel corresponding to a previous blurring layer; for the non-progressive blurring region in the same blurring layer, a filtering parameter σ with a larger value can be adopted to obtain a smoother filtering effect. Thereby enabling a smooth transition within the virtualization hierarchy.
And, as the blurring level increases, the value of the filter parameter σ may also increase in sequence. Thereby enabling a smooth transition between the various levels of blurring.
Further, in the gaussian filtering, the value of the filter parameter σ can be further determined according to the position of the pixel point to be processed. When the pixel points to be processed in the progressive blurring region are filtered in a gaussian filtering manner, the value of the filter parameter σ can change with the distance between the pixel point to be processed and the inner boundary of the progressive blurring region.
Exemplarily, when filtering the to-be-processed pixel points in the progressive blurring region, it may be considered that the to-be-processed pixel points close to the inner boundary of the progressive blurring region are largely kept as original values, and the to-be-processed pixel points far away from the inner boundary of the progressive blurring region are largely blurred. In order to obtain the above effect, a value of σ may be designed differently based on different distances between the pixel point to be processed in the progressive blurring region and the inner boundary of the progressive blurring region. For example, the value of σ may increase as the distance of the pixel point to be processed from the inner boundary of the progressive blurring region increases.
Therefore, the image blurring degree is weaker at the position close to the ROI, and the original value of the pixel point to be processed can be kept to a greater degree; at the position far away from the ROI, the smoothing effect is better, the image blurring is more obvious, and therefore smooth transition from the ROI to the progressive blurring area is achieved.
In this implementation, both progressive blurring regions and non-progressive blurring regions may be blurred in a gaussian filtering manner, and only values of the filter parameter σ used in different regions are different, thereby achieving blurring effects of different degrees.
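A minimal sketch of how σ might be chosen per pixel under this scheme is given below; the base value, the step per level and the linear mapping over the progressive region are assumptions for illustration only.

```python
def sigma_for_pixel(d_blur, b_blur, level, sigma_base=0.8, sigma_step=0.6):
    """Pick a Gaussian filter parameter for one pixel to be processed.

    - non-progressive region of a level: a fixed, larger sigma per level;
    - progressive region of a level: sigma grows with the distance d_blur from
      the inner boundary, reaching the level's sigma at the outer boundary.
    """
    sigma_level = sigma_base + sigma_step * level
    if d_blur is None:                     # pixel lies in a non-progressive region
        return sigma_level
    return sigma_base + (d_blur / b_blur) * (sigma_level - sigma_base)

print(sigma_for_pixel(d_blur=10, b_blur=40, level=2))   # weaker blurring near the inner boundary
print(sigma_for_pixel(d_blur=None, b_blur=40, level=2))  # full blurring in the non-progressive region
```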
It should be understood that the specific filtering methods listed above are only examples and should not be construed as limiting the application in any way. Furthermore, the above filtering methods may be used in combination or implemented separately. For example, gaussian filtering may be applied to the non-progressive blurring regions and weight filtering to the progressive blurring regions; for another example, mean filtering may be applied to the non-progressive blurring regions and gaussian filtering to the progressive blurring regions; for yet another example, gaussian filtering may be applied to both the non-progressive blurring regions and the progressive blurring regions. This is not a limitation of the present application.
Based on the filtering manner listed above, the image processing apparatus can complete the blurring processing of each pixel point to be processed in the image to be processed, so as to obtain a blurred image. In the blurring processing process, a progressive blurring region is added between non-progressive blurring regions for transition, a progressive blurring region is also added between the ROI and the non-progressive blurring regions for transition, and the transition is slowly performed through at least one blurring layer, so that the obtained image is in clear-to-fuzzy smooth transition, the filtering boundary is not obvious, and the visual effect is good.
It should be understood that the specific process of blurring the progressive blurring region is described in detail above in connection with two different filtering modes. This should not be construed as limiting the application in any way. The present application is not limited to the specific way of blurring.
As described above, the image processing apparatus divides the region before performing the filtering process on the image. Optionally, the method further comprises: and determining progressive blurring regions in each blurring layer surrounding each ROI according to the position of each ROI and the width of the progressive blurring regions.
In particular, the position and size of the ROI may differ for different images to be processed. As the position and/or size of the ROI changes, the number and/or position of the blurring layers surrounding the ROI also changes. Therefore, the blurring layers can be determined according to the position of each ROI, and then the progressive blurring regions in each blurring layer can be determined. It will be appreciated that each blurring layer consists of a progressive blurring region and a non-progressive blurring region, so once the progressive blurring region in each blurring layer is determined, the non-progressive blurring region in each blurring layer is also determined.
Optionally, the method further comprises: and determining the progressive blurring region in each blurring layer according to the boundary of each blurring layer and the width of the progressive blurring region.
If the non-progressive blurring region and the progressive blurring region in each blurring layer are divided by the image processing apparatus, the image processing apparatus may, after determining the boundary of each blurring layer, further determine the progressive blurring region in each blurring layer according to the width of the progressive blurring region. Because each blurring layer is composed of an adjacent progressive blurring region and non-progressive blurring region, determining the progressive blurring region in each blurring layer also determines the non-progressive blurring region in each blurring layer.
In some cases, the blurring layer of the image to be processed changes according to the change of the position of the ROI, and therefore, the progressive blurring region in the blurring layer is determined according to the boundary of the blurring layer and the width of the progressive blurring region, which may also be considered to be determined according to the position of the ROI and the width of the progressive blurring region.
In one possible design, the ratio of the width of the progressive blurring region to the size of the image to be processed is a predefined value.
For example, assuming that the ratio of the width of the progressive blurring region to the size of the image to be processed is predefined as β, then in the x-axis direction the ratio of the width of the progressive blurring region to the size of the image to be processed in the x-axis direction may be β; in the y-axis direction the ratio of the width of the progressive blurring region to the size of the image to be processed in the y-axis direction may also be β. Optionally, β is 1/32. Optionally, β is 1/64. Of course, it should be understood that the values of β are not limited to those listed above. The specific value of the ratio β is not limited in the present application.
In another possible design, the width of the progressive blurring region is a predefined value.
For example, the width of the progressive blurring region may be predefined to be a fixed value, such as 50 pixels. For another example, the width of the progressive blurring region in the x-axis direction may be predefined to be a certain fixed value, the width in the y-axis direction may be predefined to be another fixed value, and so on. For the sake of brevity, this is not illustrated individually.
Optionally, the width of the non-progressive blurring region is the same as the width of the progressive blurring region. For example, if the ratio of the width of the progressive blurring region to the size of the image to be processed is set to a predefined value, the ratio of the width of the non-progressive blurring region to the size of the image to be processed is also a predefined value. In the case of the size determination of the image to be processed, the width of the progressive blurring region and the width of the non-progressive blurring region can be directly determined. For example, if the width of the progressive blurring region is a fixed value, the width of the non-progressive blurring region is also a fixed value.
Optionally, the width of the non-progressive blurring region is different from the width of the progressive blurring region. For example, the width of the progressive blurring region may be smaller than the width of the non-progressive blurring region, or the width of the progressive blurring region may be larger than the width of the non-progressive blurring region. This is not a limitation of the present application.
Optionally, the width of the progressive blurring region is related to the width and/or the filter radius of the non-progressive blurring region in the blurring hierarchy.
The progressive blurring region may also be related to the width of the non-progressive blurring region in the belonging blurring hierarchy and/or the filter radius of the non-progressive blurring region. For example, the width of the progressive blurring region may be the same as the width of the non-progressive blurring region in the belonging blurring hierarchy, that is, the width of the progressive blurring region may be 1/2 of the width of the belonging blurring hierarchy. For another example, the width of the progressive blurring region may be 2/3 of the width of the non-progressive blurring region in the belonging blurring hierarchy. For another example, the width of the progressive blurring region may be M (M is a positive integer) times the filtering radius of the non-progressive blurring region in the belonging blurring hierarchy. The value of M may be predefined.
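The alternatives above for fixing the width of a progressive blurring region can be summarised in a small helper; the default ratio, the fixed width of 50 pixels and the multiple M are the example values mentioned above, and the function itself is only an illustrative assumption.

```python
def progressive_width(image_size, beta=1/32, fixed=None, radius=None, m=None):
    """Possible ways, per the alternatives above, to fix the width of a
    progressive blurring region along one axis:
    - a predefined ratio beta of the image size along that axis;
    - a predefined fixed number of pixels;
    - M times the filtering radius of the non-progressive region of the level.
    """
    if fixed is not None:
        return fixed                      # e.g. 50 pixels
    if radius is not None and m is not None:
        return m * radius                 # e.g. M = 8, radius = 2 -> 16 pixels
    return int(round(beta * image_size))  # e.g. 1920 * 1/32 = 60 pixels

print(progressive_width(1920))            # ratio-based width along the x axis
print(progressive_width(1080, fixed=50))  # predefined fixed width
```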
It should be understood that the above is merely for ease of understanding, and that several examples of the relationship between the width of progressive blurring regions and the width or filter radius of non-progressive blurring regions are given by way of example. This should not be construed as limiting the application in any way.
It should also be appreciated that, since each blurring layer includes a progressive blurring region and a non-progressive blurring region, determining the progressive blurring regions in each blurring layer also completes the determination of the non-progressive blurring regions.
Further optionally, the method further comprises: and determining the number of the blurring layers according to the transmission bandwidth of the image to be processed.
After acquiring an image, the photosensitive element can transmit the image to the image processing apparatus for real-time image processing. The acquired image can arrive at the storage device row by row for real-time calculation, and data that has already been processed can be overwritten by newly arriving data.
Taking a shift register (LineBuffer) as an example, one line of linebuffers is used to store one line of image data. During filtering operation, according to different sizes of filtering kernels, LineBuffer resource expenses of different scales are needed. For filtering, the size of the LineBuffer overhead may be 2 times the filtering radius.
For example, with a filtering radius of 2, the overhead is 4 lines. Specifically, when the filtering radius is 2, 5 lines of data are required for one filtering operation. In the beginning stage, after the first 4 lines arrive and are stored in the 4-line LineBuffer in sequence, the filtering operation starts as the pixels of the 5th line arrive. As filtering proceeds, the 5th line of image data is stored in the 1st line of the LineBuffer, overwriting the 1st line of image data originally stored there, and the process repeats. The LineBuffer overhead is thus 2 times the filtering radius.
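A sketch of this streaming behaviour is given below, assuming rows arrive one at a time and a filtering radius r requires a window of 2r + 1 rows; it only illustrates why 2r stored lines suffice, and the names are illustrative.

```python
def stream_filter(rows, radius, process_window):
    """Simulate streaming row-wise filtering with a LineBuffer of 2*radius rows.

    'rows' yields image rows one by one; 'process_window' consumes a list of
    2*radius + 1 rows centred on the output row. Only 2*radius previously
    received rows are stored: the newest row of each window is the one
    currently arriving.
    """
    line_buffer = []                 # at most 2*radius stored rows
    for row in rows:
        if len(line_buffer) < 2 * radius:
            line_buffer.append(row)  # filling phase, no output yet
            continue
        process_window(line_buffer + [row])      # filter one output row
        line_buffer = line_buffer[1:] + [row]    # oldest stored row is overwritten
    # rows near the bottom border would additionally need expanded data

rows = ([i] * 8 for i in range(10))              # a 10 x 8 toy image, one row at a time
stream_filter(rows, radius=2, process_window=lambda win: print(win[2]))  # centre row of each window
```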
Therefore, in the embodiment of the application, a plurality of filtering modes with different expenses are provided, so that the filtering modes are switched in real time according to computing resources, the optimal blurring effect is realized, and the bandwidth consumption during transmission is reduced to the greatest extent.
Here, the filtering mode may be defined based on a difference in blurring levels. Because the number of blurring layers is different, the filtering radius is also different, and the corresponding LineBuffer overhead is also different.
Fig. 11 shows different filtering modes provided by the embodiment of the present application. In the three filtering modes shown as a), b) and c) in fig. 11, the blurring levels are sequentially decreased. The number of blurring levels shown by a) in fig. 11 is 4, the number of blurring levels shown by b) in fig. 11 is 3, and the number of blurring levels shown by c) in fig. 11 is 2. With the reduction of the blurring layers, the filtering radius corresponding to the blurring layer at the outermost layer is also reduced, and the corresponding LineBuffer overhead is also reduced. For example, in a) of fig. 11, the filtering radii of the 4 blurring layers from inside to outside may be 1, 2, 3, and 4 in sequence, and then the corresponding LineBuffer overhead is 8 lines; in b) of fig. 11, the filtering radii of the 3 blurring layers from inside to outside may be 1, 2, and 3 in sequence, and the corresponding LineBuffer overhead is 6 lines; in c) of fig. 11, the filtering radii of the 2 blurring layers from inside to outside may be 1 and 2 in sequence, and the corresponding LineBuffer overhead is 4 lines.
Therefore, the LineBuffer resources required in various filtering modes can be calculated in advance, and the filtering modes can be adjusted in real time according to the transmission bandwidth provided by the system. In other words, a reasonable filtering mode can be adopted by pre-calculating the required LineBuffer resources, so that the optimal blurring effect is realized. Meanwhile, the size of the code stream of the coding can be reduced to a greater extent, and the bandwidth occupation during transmission is reduced.
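Assuming the per-level radii of the three modes in fig. 11 and an overhead of twice the largest radius, mode selection against an available LineBuffer budget could look as follows (illustrative only).

```python
def pick_filter_mode(available_lines, modes=((1, 2, 3, 4), (1, 2, 3), (1, 2))):
    """Choose the blurring mode with the most levels that still fits the
    LineBuffer budget. 'modes' lists the per-level filter radii of each
    pre-computed mode; the overhead of a mode is 2x its largest radius.
    """
    for radii in modes:                     # ordered from most to fewest levels
        if 2 * max(radii) <= available_lines:
            return radii
    return None                             # not even the cheapest mode fits

print(pick_filter_mode(8))   # -> (1, 2, 3, 4), overhead 8 lines
print(pick_filter_mode(5))   # -> (1, 2),       overhead 4 lines
```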
It should be understood that, for convenience of understanding, the correspondence between different filtering modes and different LineBuffer overheads is described by taking the LineBuffer overheads as an example, but this should not limit the present application in any way. The application does not limit the storage device used in the filtering process. It will be appreciated that the amount of computational resources required may be related to the filtering scheme employed, even if other memory devices are employed.
After determining the position and size of each region in the image to be processed and the filtering mode, the image processing apparatus may perform blurring on the image to be processed by using the corresponding filtering mode based on the region where each pixel point to be processed is located.
It should be understood that the above is merely for ease of understanding, three possible filtering modes are shown, but this should not be construed as limiting the present application in any way. The image processing apparatus may be preconfigured with more or fewer filtering modes. For example, the image processing apparatus may set only one filter mode, and blur the image to be processed based on the filter mode configured in advance.
Further, the width of each virtualization layer may also be predefined.
Here, the width of a blurring layer can be understood according to the following definition: the width of a subsequent blurring layer may refer to the width of the ring-shaped or stripe-shaped region surrounding the preceding blurring layer, and the width of the blurring layer immediately adjacent to the ROI may refer to the width of the ring-shaped or stripe-shaped region surrounding the ROI, as shown by d1 in a) of fig. 8.
Alternatively, the width of the blurring layer may be set according to the size of the image to be processed. The ratio of the width of each blurring level to the size of the image to be processed may be a predefined value.
For example, the ratio of the width of each blurring layer to the size of the image to be processed is a predefined value in the x-axis direction and the y-axis direction, respectively. Illustratively, the ratio of the width of the blurring layers from inside to outside to the size of the image to be processed is 0.25, 0.16, 0.1, 0.08, and the like in sequence. For another example, the ratio of the width of each blurring level to the size of the image to be processed is 1/16. For the sake of brevity, this is not to be enumerated here. The specific value of the ratio of the width of each blurring layer to the size of the image to be processed is not limited in the present application.
As shown, a blurring layer has an inner boundary and an outer boundary, wherein the inner boundary is closer to the ROI than the outer boundary. In one implementation, the inner boundary size and the outer boundary size may be determined according to the size of the image to be processed, and the width of the blurring layer may then be obtained. The inner boundary size may specifically refer to the distance between two opposite inner boundaries of the blurring layer, as shown by d2 in a) of fig. 8; the outer boundary size may specifically refer to the distance between two opposite outer boundaries of the blurring layer, as shown by d3 in a) of fig. 8. It is understood that, for two adjacent blurring layers, the outer boundary of the preceding blurring layer may be the inner boundary of the next blurring layer, and thus the outer boundary size of the preceding blurring layer may be the inner boundary size of the next blurring layer. Accordingly, for the blurring layer immediately adjacent to the ROI, the width of the ROI may be the inner boundary size of that blurring layer.
The outer boundary size or the inner boundary size of the plurality of virtualization layers may increase sequentially in an inside-out order. Illustratively, in the directions of the x axis and the y axis, the ratio of the size of the outer boundary of the plurality of blurring layers to the size of the image to be processed is 0.5, 0.6, 0.7 and 0.8 in sequence.
Further, the rectangular ROI and the rectangular regions of each blurring level may be deformed, i.e. an arc may be added to the vertical boundaries of the rectangles, to reduce the boundary effect. Since this has been described in detail above in connection with b) in fig. 8, it is not repeated here for brevity.
In order to obtain a better filtering effect, the pixels to be processed on the boundary of the image to be processed can be expanded. Optionally, the method further comprises: and expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary in the image to be processed belongs.
In an implementation manner, each boundary in the image to be processed may be used as a symmetry axis, and a filtering radius corresponding to a region to which a pixel point on the boundary belongs is used as an extension width to extend the image to be processed. This extension may be referred to as mirror extension for short.
Fig. 12 is a schematic diagram illustrating mirror expansion performed on a pixel point on a boundary of an image to be processed according to an embodiment of the present application. For the convenience of understanding, the pixel point at the upper left corner in the figure is taken as an example for illustration. The expansion in the y-axis direction may be performed by using a line of pixels on a boundary parallel to the x-axis as a symmetry axis (for example, referred to as symmetry axis 1, shown by a dotted line in the figure), and mirror-expanding the values of the pixels below the symmetry axis to corresponding positions above the symmetry axis. Assuming that the filtering radius corresponding to the region to which the pixel points on the boundary belong is 2, the values of two rows of pixel points below the symmetry axis can be mirror-extended to the corresponding positions above the symmetry axis. The expansion in the x-axis direction may take a row of pixels on a boundary parallel to the y-axis as a symmetry axis (for example, referred to as symmetry axis 2, shown by a dotted line in the figure), and mirror-expand the values of the pixels on the right side of the symmetry axis to corresponding positions on the left side of the symmetry axis. Taking the filtering radius 2 as an example, the values of two columns of pixel points on the right side of the symmetry axis are mirror-extended to the corresponding positions on the left side of the symmetry axis. Then, the method can be further expanded, for example, two columns of pixel points which are mirror-expanded to the left side of the symmetry axis 2 are mirror-expanded to the upper side of the symmetry axis 1 again by taking the symmetry axis 1 as the symmetry axis; or taking the symmetry axis 2 as a symmetry axis, and mirror-expanding the two rows of pixel points above the symmetry axis 1 to the left side of the symmetry axis 2 again. Thereby completing the expansion of the pixel point at the upper left corner. For the sake of distinction, pixels having the same pixel value are indicated by the same hatching. The value of each pixel point is not limited in the present application. In addition, the circles in the drawings illustrate the respective pixels, which are only illustrated for convenience of distinction and should not constitute any limitation to the present application.
It should be understood that the specific manner of expanding the pixels on the boundary of the image to be processed is not limited to the mirror image expansion described above. For example, 0-value padding may be used, or a pixel value at a boundary may be subjected to replica extension or the like. This is not a limitation of the present application. For the sake of brevity, the drawings are not necessarily described herein.
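As a sketch, the mirror, replicate and zero-value expansions mentioned above can be expressed with NumPy's padding modes; the pad widths here are illustrative and would in practice follow the filtering radius of the region that the boundary pixels belong to.

```python
import numpy as np

img = np.arange(36, dtype=np.float64).reshape(6, 6)   # toy single-channel image
r = 2                                                  # filter radius at the border

mirrored   = np.pad(img, r, mode="reflect")    # mirror expansion about the boundary row/column
replicated = np.pad(img, r, mode="edge")       # replicate the boundary pixel values
zero_fill  = np.pad(img, r, mode="constant")   # pad with zeros

# the pad width may also differ per border, matching the filter radius of the
# region to which the pixels on that border belong, e.g.:
expanded = np.pad(img, ((r, r), (1, 1)), mode="reflect")
```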
Optionally, the image to be processed comprises image data of one or more image channels; the virtual image is obtained by fusing the virtual image data of one or more image channels, and the virtual image data of the image channels comprises values obtained by filtering each pixel point to be processed in the image channels.
Experiments show that blurring the image to be processed with different filtering radii for different channels has no obvious adverse effect. Therefore, the channels can be blurred individually, and the filtering method and related parameters used for each channel can also be designed individually.
For example, the image to be processed may include image data of Y channel, U channel, and V channel. The blurred image may be a result of fusing image data obtained by blurring image data of the Y channel, the U channel, and the V channel, respectively. Wherein the image data of each channel may be blurred based on the method described above.
In one possible design, the filtering radii of the same pixel point to be processed in the image to be processed corresponding to different data channels are different.
The currently prevailing YUV formats include YUV444, YUV422, YUV420, and the like. In the YUV422 and YUV420 data formats, a down-sampling operation is applied to the image data of the two chrominance (U and V) channels, which results in the Y, U and V channels having inconsistent image data dimensions. Different filtering radii can therefore be used for different channels.
For example, the filtering radius corresponding to the U channel or the V channel may be 1/2 of the filtering radius of the Y channel. Therefore, the filter kernel corresponding to the U channel or the V channel is smaller than the filter kernel corresponding to the Y channel.
As an example, the image data for the Y channel may be divided into three levels of blurring, with corresponding filtering radii increasing sequentially from inside to outside. For example 1, 2 and 3 in order from the inside to the outside. The filter radii of the U and V channels may be designed to be 1/2 of the filter radius of the Y channel. In this case, the image data of the U channel and the V channel may not be divided into the blurring layers, and the filtering radius is uniformly set to 1. That is, except for the ROI, all the pixels to be processed in the U-channel and V-channel image data are filtered according to the filtering radius of 1.
In addition, when the pixel points on the boundary of the image to be processed are expanded, each channel can be separately expanded. Specifically, for each of the plurality of channels, the image data is expanded according to the filtering radius corresponding to the region to which the pixel point at the boundary belongs.
In the above example, the image data of the Y channel may be divided into three blurring layers with filtering radii of 1, 2 and 3 from inside to outside, so when the pixel points on the boundary are expanded, the image data is expanded using the filtering radius 3 as the expansion width. The image data of the U channel and the V channel is divided into only one blurring layer, so when the pixel points on the boundary are expanded, the image data can be expanded using the filtering radius 1 as the expansion width. It can be seen that the expansion widths of the image data of the plurality of channels are different.
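A sketch of this per-channel processing is given below; blur_fn stands in for the region-based blurring routine described above and is assumed rather than defined here, and the per-channel radii follow the example.

```python
import numpy as np

def blur_yuv(y, u, v, blur_fn):
    """Blur the three channels of a YUV image independently.

    'blur_fn(channel, radii)' is assumed to apply the region-based blurring
    with the given per-level filter radii and to expand the channel borders
    by the largest radius it uses.
    """
    y_out = blur_fn(y, radii=(1, 2, 3))   # three levels, border expansion width 3
    u_out = blur_fn(u, radii=(1,))        # single level, border expansion width 1
    v_out = blur_fn(v, radii=(1,))
    return y_out, u_out, v_out

# usage with an identity stand-in for the blurring routine
y, u, v = blur_yuv(np.ones((480, 640)), np.ones((240, 320)), np.ones((240, 320)),
                   blur_fn=lambda ch, radii: ch)
```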
It should be understood that the above has been described with the Y channel, U channel, and V channel as examples to separately process (e.g., include filtering processing and expansion) image data of a plurality of channels for ease of understanding only. This should not be construed as limiting the application in any way. For example, the image to be processed may also be in RGB format, and the plurality of channels included in the image to be processed may also be R channel, G channel, and B channel, which are not listed here for brevity.
Since the image data of a plurality of channels can be processed individually, part of the channels, such as the Y channel in the above example, can be subjected to the filtering process, whereby the amount of calculation of the image processing apparatus can be reduced.
The process of the image blurring process is described in detail above with reference to fig. 4 to 12. In order to obtain better visual effect, a sharpening process may be further performed on a partial region in the image, such as a sharpening process on the ROI.
The ROI may be sharpened by a sharpening method in the prior art, or by a sharpening method provided in the embodiment of the present application, which is not limited in the present application.
An image processing method 500 according to another embodiment of the present application will be described in detail with reference to fig. 13. The image processing method 500 shown in fig. 13 includes steps 510 to 540. The steps in method 500 are described in detail below.
In step 510, a to-be-processed image is acquired.
Step 510 is the same as step 410. Reference may therefore be made to the description above regarding step 410 of method 400 for an explanation of step 510. For the sake of brevity, this is not repeated here.
In this embodiment, the image to be processed may include at least one ROI and at least one blurring layer, and each blurring layer may include a progressive blurring region and a non-progressive blurring region adjacent to each other. The ROI, the blurring layer, and the arrangement between the progressive blurring region and the non-progressive blurring region in the blurring layer are described in detail in the method 400 above with reference to the drawings, and for brevity, no further description is given here.
In step 520, each ROI of the at least one ROI is sharpened.
In the embodiment of the present application, each ROI may include a non-progressive sharpening region, a progressive sharpening region, and an original value holding region. For ease of understanding and explanation, reference is made below to FIG. 14.
Fig. 14 shows an example of an image to be processed. The image to be processed shown in fig. 14 includes the ROI and a blurring layer (including a progressive blurring region and a non-progressive blurring region) adjacent to the ROI. In the figure, the boundary of the ROI is indicated by a thick solid line for the sake of distinction. In the ROI, an original value holding region, a progressive sharpening region, and a non-progressive sharpening region may be included in order from outside to inside. In other words, the progressive sharpening region surrounds the non-progressive sharpening region, and the original value holding region surrounds the progressive sharpening region.
The original value holding area is an area that does not need to be processed. The pixel points in the original value holding area can be kept unchanged. The non-progressive sharpening region can be sharpened based on the existing sharpening mode. In order to alleviate poor experience brought by a filtering boundary between an original value holding area and a non-progressive sharpening area to vision, in the embodiment of the application, a progressive sharpening area is added between the original value holding area and the non-progressive sharpening area so as to realize smooth transition between the original value holding area and the non-progressive sharpening area.
It is to be understood that the ROI shown in fig. 14 may be, for example, the ROI described above in connection with fig. 8 and 9. For ease of understanding and explanation, the sharpening process for the progressively sharpened region in the ROI is described by taking any one of the at least one ROI as an example. It should be understood that the sharpening process of the image processing apparatus for each progressive sharpening region is similar.
In the embodiment of the application, the value of the pixel point to be processed in the progressive sharpening region is related to the distance between the pixel point to be processed and the outer boundary of the progressive sharpening region. Wherein an outer boundary of the progressively sharpened region is farther from the center of the ROI than an inner boundary of the progressively sharpened region.
Optionally, the sharpening process for the progressive sharpening region may specifically include:
extracting high-frequency information of a progressive sharpening region;
and determining the sharpened value of each pixel point to be processed based on the original value and the high-frequency information of each pixel point to be processed in the progressive sharpening region.
Wherein, the sharpened value of any one pixel point in the progressive sharpening region is positively correlated with the distance between the pixel point and the outer boundary of the progressive sharpening region.
The high-frequency information may specifically include the frequency components, in the frequency-domain representation of the image to be processed (obtained by converting the image from the spatial domain to the frequency domain), whose frequencies are greater than a preset threshold. It may, for example, include edges and other sharp variations (e.g., noise), which are predominantly found at the high frequencies of the image gray levels. The specific size of the preset threshold is not limited in the present application. The high-frequency information has been described in detail above and is not repeated here for brevity.
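For illustration, high-frequency content can be isolated in the frequency domain roughly as follows; the circular cutoff and its ratio are assumptions, not parameters defined by this application.

```python
import numpy as np

def high_freq_component(img, cutoff_ratio=0.1):
    """Extract high-frequency content of a single-channel image: convert to
    the frequency domain, keep only frequency points beyond a threshold
    distance from the zero frequency, and convert back to the spatial domain.
    """
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float64)))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)          # distance from the zero frequency
    mask = dist > cutoff_ratio * min(h, w)           # keep only high frequencies
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real
```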
For ease of understanding and explanation, the following description takes one pixel point in the progressive sharpening region as an example, referred to as the second pixel point. The second pixel point may be any pixel point in the progressive sharpening region. The sharpened value of the second pixel point is positively correlated with the distance between the second pixel point and the outer boundary of the progressive sharpening region. In one possible design, the sharpened value y of the second pixel point may be calculated as

y = y_o + (D_sharpen / B_sharpen) × y_h.

In another possible design, the sharpened value y of the second pixel point may be calculated by scaling the high-frequency information y_h by a different monotonically increasing function of D_sharpen / B_sharpen before adding it to y_o.

Here, D_sharpen represents the distance between the second pixel point and the outer boundary of the progressive sharpening region, B_sharpen represents the width of the progressive sharpening region, y_h represents the high-frequency information of the second pixel point, and y_o represents the original value of the second pixel point.
It should be understood that the formulas listed above for calculating the sharpened value of the second pixel point are only examples and should not limit the present application in any way. The calculation of the sharpened value of the second pixel point is not limited to the methods listed above. For example, the ratio D_sharpen / B_sharpen in the above formula may also be multiplied by a coefficient.
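Reading the relation above as the original value plus distance-scaled high-frequency information, a per-pixel sketch (with an optional coefficient, as just mentioned) could look as follows; the exact scaling is an assumption consistent with the description rather than a formula fixed by this application.

```python
def sharpen_progressive_pixel(y_o, y_h, d_sharpen, b_sharpen, alpha=1.0):
    """Sharpened value of a pixel in the progressive sharpening region.

    Near the outer boundary (d_sharpen -> 0) the original value is kept,
    matching the adjacent original value holding region; near the inner
    boundary (d_sharpen -> b_sharpen) the pixel is fully sharpened,
    matching the adjacent non-progressive sharpening region.
    """
    return y_o + alpha * (d_sharpen / b_sharpen) * y_h

print(sharpen_progressive_pixel(y_o=120.0, y_h=14.0, d_sharpen=5, b_sharpen=20))  # 123.5
```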
The extracting of the high-frequency information of the progressive sharpening region may specifically include:
filtering the progressive sharpening area to obtain initial high-frequency information of the progressive sharpening area;
and performing gain processing and limiting processing on the initial high-frequency information of the progressive sharpening area to obtain the high-frequency information of the progressive sharpening area.
The steps of extracting high frequency information, performing gain processing, performing limit processing, and the like in the sharpening process will be described in detail below with reference to the drawings, and will not be described in detail here.
In fact, in the process of sharpening the ROI, corresponding operations may be performed according to the region to which each pixel to be processed belongs. The above-described process of sharpening the progressive sharpened region is also applicable to the non-progressive sharpened region. However, it should be noted that the sharpening process for the non-progressive sharpening region may not consider the distance between the pixel point to be processed and the inner boundary or the outer boundary of the non-progressive sharpening region. In other words, the sharpening process for any one to-be-processed pixel point in the non-progressive sharpening region may be consistent and does not depend on the change of the position of the to-be-processed pixel point.
For ease of understanding, a specific process of sharpening the non-progressively sharpened region will be described in detail below with reference to fig. 15 and 17.
As shown in fig. 15, the image to be processed may be first input to a Low Pass Filter (LPF) to extract medium and low frequency information. The low-pass filter may extract the low-and-medium-frequency information by means of mean filtering or weighted filtering, for example. Specifically, a convolution operation may be performed through a preset low-pass filtering template to extract medium and low frequency information. The low-pass filter may, for example, use a 7 × 7 filter kernel for mean filtering; the low-pass filter may also perform weight filtering using a 5 × 5 filter kernel, for example; the size of the filtering radius and the configuration of the filtering weight of each pixel point are not limited in the application. The low and medium frequency information extracted by the low pass filter may also vary based on different filter radii and different configurations. This can be adjusted to the actual effect.
It should be understood that the above process of extracting the low and medium frequency information may be implemented by frequency domain filtering. For example, before the image to be processed is input to the low-pass filter, the image to be processed may be converted from the spatial domain to the frequency domain, where it is filtered using a filtering function to extract the low-and-medium frequency information.
Thereafter, the extracted low and medium frequency information can be used to subtract from the original image to extract the initial high frequency information. For example, the above-mentioned medium and low frequency information may be subtracted from the frequency information of the image to be processed converted into the frequency domain, whereby high frequency information of the image to be processed may be obtained. The high frequency information thus obtained may also carry much noise and require further processing, and is therefore referred to as initial high frequency information. Thereafter, the initial high frequency information may be gain processed. The gain processing of the initial high frequency information aims to eliminate noise and enhance the obvious detail information. The gain processing of the initial high-frequency information may determine a value after the gain according to the value of the initial high-frequency information corresponding to each pixel point. For example, the gain coefficients corresponding to the respective pixel points may be determined based on the values of the initial high-frequency information corresponding to the respective pixel points. This process may be referred to as feathering. And then multiplying the value of the initial high-frequency information corresponding to each pixel point by the corresponding gain coefficient to obtain a value after gain.
Fig. 16 shows a schematic diagram of feathering. As shown in fig. 16, the horizontal axis represents the value of the input initial high-frequency information, denoted by x; the vertical axis represents the output feathered value, denoted by w_x. For initial high-frequency values smaller than a preset value (e.g., denoted as a first preset value, see p_0 in the figure), the output takes the value w_0; for values greater than another preset value (e.g., denoted as a second preset value, see p_1 in the figure), the output takes the value w_1. For values between the first preset value and the second preset value, the output takes the value k × (x − p_0), where k represents the slope and can be determined by (w_1 − w_0) / (p_1 − p_0).
The above process of feathering can be expressed, for example, by the following formula:

w_x = w_0, if |x| < p_0;
w_x = k × (|x| − p_0), if p_0 ≤ |x| ≤ p_1;
w_x = w_1, if |x| > p_1,

where k = (w_1 − w_0) / (p_1 − p_0). Exemplarily, w_0 is 0, w_1 is 1, p_0 is 0 or 1, and p_1 is 2.
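A direct transcription of the feathering curve, using the example values w_0 = 0, w_1 = 1, p_0 = 1 and p_1 = 2 as defaults, might look as follows; treating positive and negative high-frequency values symmetrically via |x| is an assumption based on the description of fig. 16.

```python
def feather_gain(x, w0=0.0, w1=1.0, p0=1.0, p1=2.0):
    """Piecewise gain coefficient for a high-frequency value x: small values
    are suppressed, large values are kept, with a linear ramp in between."""
    k = (w1 - w0) / (p1 - p0)
    ax = abs(x)
    if ax < p0:
        return w0
    if ax > p1:
        return w1
    return k * (ax - p0)

gained = [x * feather_gain(x) for x in (-3.0, 0.5, 1.5, 4.0)]  # small values (noise) removed, large values (edges) kept
```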
After the gain coefficient is obtained, the high-frequency information with small input values (e.g., high-frequency information in (−p_0, p_0)) can be multiplied by w_0; here w_0 takes the value 0, i.e. these values are directly removed. The high-frequency information with larger input values (e.g., greater than p_1 or smaller than −p_1) is multiplied by w_1; here w_1 takes the value 1, i.e. these values are retained. Therefore, noise is removed from the high-frequency information after the gain processing, and the detail information is retained.
The high-frequency information after the gain processing can be further subjected to limiting processing so as to cut off the excessively high-frequency information. High frequency information of the image to be processed can thereby be obtained.
The high-frequency information of the image to be processed can be further overlapped with the input image to be processed to obtain a sharpened image. It can be seen that the detail information in the image at this time is enhanced.
As can be seen from the above process, the sharpened value z of any pixel point to be processed in the non-progressive sharpening region can be determined as z = z_h + z_0, where z_0 represents the original value of the pixel point to be processed, and z_h represents the value of that pixel point after the high-frequency signal of the image to be processed is converted back into the spatial domain.
It should be appreciated that the sharpening process is described in detail above in conjunction with formulas for ease of understanding. This should not be construed as limiting the application in any way. In specific embodiments, z_0, z_h and the like may only be intermediate variables and need not be output.
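Putting the steps of fig. 15 together for a non-progressive sharpening region, a rough sketch is shown below (mean low-pass filter, a simple noise gate standing in for the feathering gain, then limiting and superposition); all thresholds and parameter values are assumptions.

```python
import numpy as np

def mean_blur(img, radius):
    """Mean filtering with a (2*radius+1)^2 kernel and mirror-padded borders."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(np.float64), radius, mode="reflect")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sharpen(img, radius=3, noise_floor=1.0, limit=30.0):
    """Low-pass filter -> initial high-frequency information -> gain
    (a simple noise gate here) -> limiting -> superpose onto the input."""
    low_mid = mean_blur(img, radius)                           # medium/low-frequency information
    high0 = img.astype(np.float64) - low_mid                   # initial high-frequency information
    high = np.where(np.abs(high0) < noise_floor, 0.0, high0)   # gain: drop small values, keep large ones
    high = np.clip(high, -limit, limit)                        # limiting: cut off overly large values
    return img.astype(np.float64) + high                       # z = z_h + z_0
```

The feather_gain function from the earlier sketch could replace the simple noise gate if a linear ramp between the two thresholds is preferred.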
It should also be understood that the implementation of sharpening is not limited to the process described above in FIG. 15. FIG. 17 illustrates another implementation of sharpening.
As illustrated in FIG. 17, the image to be processed may first be input into low-pass filters to extract the medium and low frequency information. In order to distinguish the low frequency information from the medium frequency information, two low-pass filters with different filtering radii may be provided. As mentioned above, the larger the filtering radius, the more blurred the filtered image. Therefore, the filtering radius of one low-pass filter can be set to 2, with a corresponding 5×5 filtering kernel, to extract the medium and low frequency information; the filtering radius of the other low-pass filter is set to 3, with a corresponding 7×7 filtering kernel, to extract the low frequency information.
Thereafter, the extracted medium and low frequency information can be subtracted from the original image to obtain the initial high frequency information. This is the same as the process in FIG. 15 and is not repeated here for brevity.
The extracted low frequency information may also be subtracted from the extracted medium and low frequency information to obtain the initial intermediate frequency information. This process is similar to the process of obtaining the high frequency information described above and, for brevity, is not detailed here.
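A combined sketch of the band separation in FIG. 17 follows; the radii 2 and 3 (5×5 and 7×7 kernels) come from the example above, while the use of mean filters as the two low-pass filters is an assumption made only for this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def split_bands(image):
    """Split an image into low, initial intermediate and initial high bands.

    The filtering radii 2 (5x5 kernel) and 3 (7x7 kernel) follow the example
    in the text; using mean filters as the two low-pass filters is an
    assumption made only for this sketch.
    """
    img = image.astype(np.float32)
    low_mid = uniform_filter(img, size=5, mode='reflect')  # radius 2: medium + low
    low = uniform_filter(img, size=7, mode='reflect')      # radius 3: low only
    init_high = img - low_mid      # initial high frequency information
    init_mid = low_mid - low       # initial intermediate frequency information
    return low, init_mid, init_high
```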
It will be appreciated that both the initial intermediate frequency information and the initial high frequency information may carry noise and require further processing.
Thereafter, the initial intermediate frequency information and the initial high frequency information may be subjected to gain processing and limiting processing, respectively, to obtain the detail information of the image. Finally, the detail information is superimposed on the input image to be processed to obtain a sharpened image, so that the detail information in the image is enhanced.
Since the specific procedures of the gain processing and the limiting processing have been described in detail above in conjunction with fig. 15 and 16, they are not described here again for brevity.
It should be understood that the above-described process of sharpening a non-progressively sharpened region is only an example, and the specific implementation manner of the present application is not limited thereto. Since the sharpening process for the non-progressively sharpened region can refer to the method in the prior art, the detailed description is omitted here for the sake of brevity.
In the embodiment of the present application, in combination with the above-described sharpening process for the non-progressively sharpened region, the image processing apparatus needs to further consider the distance between the pixel point to be processed and the outer boundary of the progressively sharpened region in the sharpening process for the progressively sharpened region.
As mentioned above, the sharpened value of any pixel point in the progressive sharpening region can be calculated by either of the two formulas described above. Therefore, after the high frequency information of the image to be processed is extracted, it can be converted into the spatial domain to obtain the value of each pixel point, for example y_h, and the sharpened value of each pixel point is then determined from the first of these formulas. Alternatively, after the high frequency information of the image to be processed is extracted, it can be directly superimposed on the frequency information obtained by converting the original image into the frequency domain, the superimposed result is converted into the spatial domain to obtain the superimposed value y_h + y_0 of each pixel point, and the sharpened value of each pixel point can then be determined from the second of these formulas.
Based on the above processing, detail information in the image is enhanced.
It should be appreciated that the sharpening process is described in detail above in conjunction with formulas for ease of understanding. This should not be construed as limiting the application in any way. In practice, y_h, y_0, etc. may be intermediate variables only and are not necessarily output.
It should also be understood that the above-described sharpening process for determining pixel values based on the distance of each pixel point from the outer boundary of the progressive sharpening region is merely an example, and the present application is not limited to specific implementations thereof.
It can be understood that the sharpened image may be obtained by sharpening values of pixel points in a progressive sharpening region and a non-progressive sharpening region in the image to be processed.
Compared with the sharpening of a pixel point to be processed in the non-progressive sharpening region, the sharpening of a pixel point to be processed in the progressive sharpening region takes into account its distance from the outer boundary of the progressive sharpening region: as this distance increases, the sharpened value also increases, the degree of sharpening is higher, and the detail information is enhanced to a greater degree. After the transition from the progressive sharpening region to the non-progressive sharpening region, the high frequency information is directly superimposed on the original values of the pixel points, and the enhancement of the detail information reaches its maximum. Therefore, a smooth transition from the region keeping original values to the progressive sharpening region and then to the non-progressive sharpening region can be achieved; the boundaries between the regions are not obvious, while the sharpening effect becomes progressively stronger, leading to a better visual effect.
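Because the exact progressive-sharpening formulas are given above by reference, the sketch below assumes one plausible reading of this behaviour: the high frequency contribution is ramped linearly with the distance of the pixel point from the outer boundary of the progressive sharpening region, reaching the full contribution at the region width. The linear ramp and all names are assumptions for illustration only.

```python
import numpy as np

def progressive_sharpen(z0, zh, dist, width):
    """Sharpened value of a pixel in the progressive sharpening region.

    z0    : original value of the pixel point to be processed
    zh    : spatial-domain high frequency value for the pixel point
    dist  : distance from the pixel point to the outer boundary of the region
    width : width of the progressive sharpening region
    The linear weight dist / width is an assumed interpolation; the exact
    expressions used in this application are the formulas referenced above.
    """
    weight = np.clip(dist / float(width), 0.0, 1.0)   # grows with the distance
    return z0 + weight * zh                           # full zh at the inner edge
```

Under this reading, a pixel on the outer boundary (dist = 0) keeps its original value and a pixel at the inner edge (dist = width) receives the full high frequency contribution, which matches the smooth transition described above.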
It should be understood that the implementation of sharpening is not limited to the two implementations listed in the embodiments of the present application. The high frequency information can be extracted directly using a high pass filter, for example. Since specific implementations of sharpening can be found in the prior art, they will not be described in detail here for the sake of brevity.
Corresponding to the blurring process, the image to be processed may comprise image data of one or more channels. The sharpened image may accordingly be the result of fusing image data obtained by sharpening the image data of at least one of the one or more channels with the image data that has not been sharpened.
In other words, in the case where the image to be processed includes image data of a plurality of channels, only image data of a part of the channels may be sharpened.
For example, the image to be processed includes image data of Y channel, U channel, and V channel. Since human eyes are sensitive to luminance information, only image data of the Y channel can be sharpened, thereby reducing computational complexity.
In this case, the sharpened image may be obtained by fusing image data obtained by sharpening image data of the Y channel and image data of the U channel and the V channel that are not sharpened.
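A minimal sketch of this Y-only sharpening and fusion follows; planar YUV arrays, an 8-bit value range, and the generic sharpen callable are all assumptions for illustration.

```python
import numpy as np

def sharpen_yuv(y, u, v, sharpen):
    """Sharpen only the Y (luma) plane and fuse it with the untouched U/V planes.

    `sharpen` may be any single-channel sharpening routine, for example the
    sketches above; planar YUV arrays and an 8-bit value range are assumed.
    """
    y_sharp = sharpen(y.astype(np.float32))
    y_sharp = np.clip(y_sharp, 0, 255).astype(y.dtype)   # keep the original range
    return y_sharp, u, v                                  # fused (Y', U, V) result
```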
Corresponding to the blurring process, when sharpening the ROI, it may also be necessary to expand the pixel points on the boundary. In the image to be processed shown in FIG. 14, the progressive sharpening region is surrounded by the ROI, so the pixel points on the outer boundary of the progressive sharpening region can be expanded. The detailed expansion manner can be found, for example, in the description made above in conjunction with FIG. 12, and is not repeated here for brevity.
It can be understood that, if only the image data of a part of the channels (for example, the Y channel) is sharpened, only the pixel points on the boundary in the part of the channels may be expanded.
It should be understood that the YUV format is merely exemplary for ease of understanding and explanation and should not constitute any limitation of the present application. The application does not limit the specific format of the image to be processed. For example, the image to be processed may also be in RGB format, and the plurality of channels included in the image to be processed may also be R channel, G channel, and B channel, which are not listed here for brevity.
Optionally, the method 500 further comprises step 530: and blurring each pixel point to be processed in at least one blurring layer.
Step 530 may specifically include: filtering each pixel point to be processed in at least one progressive blurring region in at least one blurring layer; and filtering each pixel point to be processed in at least one non-progressive blurring region in at least one blurring layer.
It should be understood that step 530 is split into two steps here only for the convenience of distinguishing the filtering manners of the progressive and non-progressive blurring regions, and this should not constitute any limitation on the order of execution.
Step 530 is the same as step 420. Reference may therefore be made to the description above regarding step 420 in method 400 for an explanation of step 530. For the sake of brevity, this is not repeated here.
It should be understood that a number of possible implementations are shown above for step 420 in method 400. Any of the above possible implementations may be combined with step 520 in this embodiment to implement different operations on different areas of the image to be processed.
Optionally, the image to be processed comprises image data of one or more image channels; the blurred image is obtained by fusing the blurred image data of the one or more image channels, and the blurred image data of an image channel comprises the values obtained by filtering each pixel point to be processed in that image channel.
Optionally, the method 500 further comprises: and determining progressive blurring regions in each blurring layer surrounding each ROI according to the position of each ROI and the width of the progressive blurring regions.
Optionally, the method 500 further comprises: and determining the progressive blurring region in each blurring layer according to the boundary of each blurring layer and the width of the progressive blurring region.
Optionally, the method 500 further comprises: and determining the number of the blurring layers according to the transmission bandwidth of the image to be processed.
Optionally, the method 500 further comprises: and expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary in the image to be processed belongs.
It should be understood that the above steps of filtering the pixels to be processed in one or more image channels in the image to be processed, determining the progressive blurring region, determining the number of blurring levels, and expanding the image to be processed may all refer to the related description in the above method 400, and are not repeated here for brevity.
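For reference, a minimal sketch of the mirror expansion step mentioned above is given below; it assumes a single-channel image and a single filtering radius for all four boundaries, whereas in practice the radius may differ per region and per channel.

```python
import numpy as np

def expand_for_filtering(image, radius):
    """Mirror-expand a single-channel image before filtering its border pixels.

    Each image boundary is used as a symmetry axis and the image is extended
    by `radius` pixels on every side; using one radius for all four sides is
    a simplification, since the radius may differ per region in practice.
    """
    return np.pad(image, pad_width=radius, mode='symmetric')
```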
Optionally, the method 500 further comprises step 540: and outputting the processed image.
In combination with the blurring processing and the sharpening processing of the image to be processed described above, the processed image in the embodiment of the present application may be an image obtained by blurring each blurring layer and sharpening the ROI. The image processing device may fuse the blurred image and the sharpened image to obtain and output a processed image.
It should be understood that the blurring process and the sharpening process described above are not necessarily performed as two separate processes. The processes of step 520 and step 530 are shown for ease of understanding only, and should not be construed as limiting the order of execution. As described above, after the image to be processed is input to the image processing apparatus, the image processing apparatus may process the image according to the region to which each pixel point to be processed belongs by using a corresponding processing method. Alternatively, the image processing apparatus may divide the blurring process and the sharpening process into two processes, and the two processes may be processed in parallel or in series. This is not a limitation of the present application.
It should also be understood that the blurring process and the sharpening process may be two separate operations, and may be implemented separately or in combination in the image processing field. And when used in combination, the application does not limit the specific filtering mode used in the blurring process. A number of possible combinations have been listed above and will not be described here for the sake of brevity.
Based on the above technical scheme, progressive blurring and progressive sharpening are combined: progressive sharpening can be achieved from the outside to the inside of the ROI, and progressive blurring can be achieved from the inside to the outside of the region outside the ROI, so that the processed image transitions smoothly from the inside to the outside, the bad visual experience caused by filtering boundaries is relieved, and a better visual effect is achieved.
For ease of understanding, the following will briefly describe a specific flow of the method provided in the embodiment of the present application with reference to fig. 18. Fig. 18 is another schematic flowchart of an image processing method provided in an embodiment of the present application.
As shown in fig. 18, the image to be processed and the configuration are input into the image processing apparatus. Optionally, after acquiring the image to be processed and the configuration, the image processing apparatus may perform region division on the image to be processed according to the configuration. The image processing device may further expand the pixel points located on the boundary of the image to be processed according to the divided regions, for example, by adopting a mirror image expansion mode. And blurring and sharpening the expanded image respectively. It can be understood that based on the difference of the regions to which the pixels to be processed belong in the image, part of the pixels in the image can be blurred and sharpened. The blurred and sharpened values of the pixels can be used for determining a blurred image area and a sharpened image area respectively. And fusing the blurred image and the sharpened image to obtain a processed image. The image processing apparatus may output the processed image, for example, through a communication interface.
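The flow of FIG. 18 can be summarized by the following sketch, in which every helper function is a placeholder standing for the corresponding step described in this application; none of the names used here is mandated by the method itself.

```python
def process_image(image, config,
                  divide_regions, expand, blur_regions, sharpen_rois, fuse):
    """High-level sketch of the flow in FIG. 18; every helper is a placeholder.

    The division, expansion, blurring, sharpening and fusion steps are passed
    in as callables because their details are described elsewhere in this
    application; none of the names used here is mandated by the method.
    """
    regions = divide_regions(image, config)      # ROI / progressive / non-progressive
    expanded = expand(image, regions)            # e.g. mirror expansion at boundaries
    blurred = blur_regions(expanded, regions)    # blur the blurring layers
    sharpened = sharpen_rois(expanded, regions)  # sharpen the ROI(s)
    return fuse(blurred, sharpened, regions)     # fused, processed image
```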
Since the above processes of region division, expansion, blurring, sharpening, and fusion have been described in detail in the above method embodiments 400 and 500 with reference to a plurality of drawings, for brevity, no further description is given here.
The image processing method according to the embodiment of the present application is described in detail with reference to fig. 4 to 18. Hereinafter, an image processing apparatus according to an embodiment of the present application will be described in detail with reference to fig. 19 and 20.
Fig. 19 is a schematic block diagram of an image processing apparatus 1000 provided in an embodiment of the present application. As shown in fig. 19, the apparatus 1000 includes: an acquisition unit 1100 and a blurring unit 1200.
The obtaining unit 1100 may be configured to obtain an image to be processed, where the image to be processed includes an adjacent progressive blurring region and a non-progressive blurring region; the blurring unit 1200 may be configured to perform filtering processing on the progressive blurring region by the first filtering core, and perform filtering processing on the non-progressive blurring region based on the second filtering core; wherein each of the first and second filter kernels comprises an inner value point and an outer value point, and the outer value point in each filter kernel surrounds the inner value point; the filtering weight of the internal value points in the first filtering kernel is greater than that of the internal value points in the second filtering kernel, and the number and the position of the internal value points in the first filtering kernel are the same as those of the internal value points in the second filtering kernel.
Optionally, a filter radius of the first filter kernel is the same as a filter radius of the second filter kernel.
Optionally, a center of the internal value point of the first filter kernel coincides with a center of the first filter kernel, and a center of the internal value point of the second filter kernel coincides with a center of the second filter kernel.
Optionally, the filtering weight of the internal value point in the first filtering kernel is w_1, the filtering weight of the internal value point in the second filtering kernel is w_2, and w_1 = (1 + w_2)/2, where w_1 and w_2 are both positive numbers.
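One way such a pair of filtering kernels might be constructed is sketched below; the application only fixes the weight relation w_1 = (1 + w_2)/2 and the matching number and positions of the internal value points, so the 5×5 size, the 3×3 block of internal value points, and the normalization are assumptions.

```python
import numpy as np

def make_filter_kernels(size=5, inner=3, w2=0.5):
    """Build a first and second filtering kernel with matching internal points.

    The internal value points form an `inner` x `inner` block at the kernel
    centre and receive weight w1 = (1 + w2) / 2 in the first kernel and w2 in
    the second; both kernels are normalized to sum to 1. The 5x5 size, the
    3x3 inner block and the normalization are assumptions for illustration.
    """
    w1 = (1.0 + w2) / 2.0
    def kernel(w_inner):
        k = np.ones((size, size), dtype=np.float32)   # external value points
        lo = (size - inner) // 2
        k[lo:lo + inner, lo:lo + inner] = w_inner      # internal value points
        return k / k.sum()                             # normalize the kernel
    return kernel(w1), kernel(w2)
```

For example, make_filter_kernels() with w2 = 0.5 gives an internal weight of 0.75 in the first kernel, so that, under this sketch, the progressive blurring region is blurred less strongly than the non-progressive blurring region, which supports the smooth transition between regions.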
Optionally, the image to be processed includes at least one blurring level, and each blurring level includes the non-progressive blurring region and the progressive blurring region that are adjacent to each other.
Optionally, in each blurring layer, the non-progressive blurring region surrounds the progressive blurring region.
Optionally, the apparatus 1000 further includes a determining unit, configured to determine a progressive blurring region in each blurring layer according to the boundary of each blurring layer and the width of the progressive blurring region.
Optionally, the width of the progressive blurring region is a predefined value.
Optionally, the width of the progressive blurring region is related to the width and/or the filtering radius of the non-progressive blurring region in the blurring hierarchy.
Optionally, the image to be processed further includes at least one ROI, each ROI is surrounded by at least one blurring layer, and the at least one blurring layer surrounding one ROI is sequentially distributed from inside to outside.
Optionally, the image to be processed further includes at least one region of interest ROI, each ROI is surrounded by at least one blurring layer, and at least one blurring layer surrounding one ROI is sequentially distributed from inside to outside, wherein in each blurring layer, the non-progressive blurring region surrounds the progressive blurring region.
Optionally, the boundary of the ROI comprises a line that is not parallel and/or perpendicular to a horizontal boundary of the image to be processed.
Optionally, in the at least one blurring layer surrounding one ROI, the filtering radii corresponding to the non-progressive blurring regions sequentially increase from inside to outside.
Optionally, in the at least one blurring layer surrounding one ROI, the filtering radius corresponding to the non-progressive blurring region is a continuous positive integer.
Optionally, the number of the at least one blurring layer is determined by a transmission bandwidth.
Optionally, the filtering manner of the non-progressive blurring region in the image to be processed is mean filtering or gaussian filtering.
Optionally, the image data of the image to be processed includes image data of one or more image channels; the blurring unit 1200 is further configured to, for each image channel of the one or more image channels, perform filtering processing on the value of each pixel point based on the filtering kernel corresponding to the region to which each pixel point belongs, so as to obtain filtered image data corresponding to each image channel; and to perform fusion processing on the filtered image data of the one or more image channels to obtain a blurred image.
Optionally, the one or more image channels include a Y channel, a U channel, and a V channel, and the image data of the image to be processed includes image data of the Y channel, the U channel, and the V channel.
Optionally, the number of pixel points of the Y channel is different from the number of pixel points of the U channel or the number of pixel points of the V channel.
Optionally, in the image to be processed, the number of pixel points of the Y channel is twice the number of pixel points of the U channel or the V channel.
Optionally, a filtering radius of the same to-be-processed pixel point in the to-be-processed image in the U channel or the V channel is 1/2 of the filtering radius corresponding to the Y channel.
Optionally, the apparatus 1000 further includes an expanding unit, configured to expand the image to be processed according to a filtering radius corresponding to a region to which a pixel point located at a boundary in the image to be processed belongs.
Optionally, the expansion unit is specifically configured to expand the image to be processed by taking each boundary in the image to be processed as a symmetry axis and taking a filtering radius corresponding to a region to which a pixel point on the boundary belongs as a symmetry width.
Optionally, the extension widths of the image data for the plurality of channels are different from each other.
Optionally, the apparatus 1000 further includes a sharpening unit 1300 configured to sharpen each ROI of the at least one ROI to obtain a sharpened image.
It is to be understood that the image processing device 1000 shown in fig. 19 may correspond to the image processing device in each of the above embodiments. The specific processes of each unit for executing the corresponding steps are already described in detail in the above method embodiments, and are not described herein again for brevity.
It is also understood that the obtaining unit in the apparatus 1000 may be implemented by a communication interface, for example, may correspond to the communication interface in fig. 20; the blurring unit, the determining unit, the sharpening unit, and the expanding unit may be implemented by at least one processor, and may correspond to a processor in the image processing apparatus shown in fig. 20, for example.
Fig. 20 is another schematic block diagram of an image processing apparatus 2000 provided in an embodiment of the present application. As shown in fig. 20, the apparatus 2000 includes: a communication interface 2100, a processor 2200, and a memory 2300. The memory 2300 stores a program, and the processor 2200 is configured to execute the program stored in the memory 2300. Execution of the program causes the processor 2200 to perform the relevant processing steps in the above method embodiments and to control the communication interface 2100 to perform the obtaining and outputting steps of the above method embodiments. In one possible design, the image processing apparatus 2000 is a chip.
In implementation, the steps in the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor. To avoid repetition, it is not described in detail here.
It should be noted that the processor in the embodiments of the present application may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method embodiments may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The processor described above may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor.
It will be appreciated that the memory in the embodiments of the present application can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory can be a random access memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
According to the method provided by the embodiment of the present application, the present application further provides a computer program product, which includes: computer program code which, when run on a computer, causes the computer to perform the method of any of the embodiments shown in figure 4, figure 13 or figure 17.
There is also provided a computer readable medium having program code stored thereon, which when run on a computer causes the computer to perform the method of any one of the embodiments shown in fig. 4, 13 or 17, according to the method provided by the embodiments of the present application.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more of the available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (49)

1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed comprises an adjacent progressive blurring region and a non-progressive blurring region;
filtering the progressive blurring region based on a first filtering core, and filtering the non-progressive blurring region based on a second filtering core; wherein each of the first and second filter kernels comprises an inner value point and an outer value point, and the outer value point in each filter kernel surrounds the inner value point; the filtering weight of the internal value points in the first filtering kernel is greater than that of the internal value points in the second filtering kernel, and the number and the position of the internal value points in the first filtering kernel are the same as those of the internal value points in the second filtering kernel.
2. The method of claim 1, wherein a filter radius of the first filter kernel is the same as a filter radius of the second filter kernel.
3. A method as claimed in claim 1 or 2, characterized in that the center of the internal value point of the first filter kernel coincides with the center of the first filter kernel and the center of the internal value point of the second filter kernel coincides with the center of the second filter kernel.
4. A method as claimed in any one of claims 1 to 3, characterized in that the filtering weight of an internal value point in the first filtering kernel is w_1, the filtering weight of an internal value point in the second filtering kernel is w_2, and w_1 = (1 + w_2)/2; wherein w_1 and w_2 are both positive numbers.
5. The method according to any one of claims 1 to 4, wherein the image to be processed comprises at least one level of blurring, each level of blurring comprising the non-progressive blurring region and the progressive blurring region adjacent thereto.
6. The method of claim 5, wherein the method further comprises:
and determining the progressive blurring region in each blurring layer according to the boundary of each blurring layer and the width of the progressive blurring region.
7. The method of claim 6, wherein a width of the progressive blurring region is a predefined value.
8. The method of claim 6, wherein the width of the progressive blurring region is related to the width and/or the filter radius of a non-progressive blurring region in the blurring hierarchy.
9. The method according to any one of claims 5 to 8, wherein the image to be processed further comprises at least one region of interest, each region of interest is surrounded by at least one blurring layer, and the at least one blurring layer surrounding one region of interest is distributed from inside to outside in sequence; wherein, in each virtualization layer, the non-progressive virtualization region surrounds the progressive virtualization region.
10. The method of claim 9, wherein the boundaries of the region of interest comprise lines that are not parallel and/or perpendicular to horizontal boundaries of the image to be processed.
11. The method according to claim 9 or 10, wherein in at least one blurring hierarchy surrounding a region of interest, the filtering radii corresponding to non-progressive blurring regions increase sequentially from inside to outside.
12. The method of claim 11, wherein in the at least one blurring hierarchy surrounding a region of interest, the non-progressive blurring regions have respective filter radii that are continuous positive integers.
13. The method of any of claims 5 to 12, wherein the number of said at least one virtualisation level is determined by a transmission bandwidth.
14. The method according to any one of claims 1 to 13, wherein the filtering mode for the non-progressive blurring region in the image to be processed is mean filtering or gaussian filtering.
15. The method of any of claims 1 to 14, wherein the image data of the image to be processed comprises image data of one or more image channels;
the filtering processing is performed on the value of each pixel point in the image to be processed based on the filtering kernel corresponding to the region to which each pixel point belongs in the image to be processed, so as to obtain the blurred image, and the filtering processing includes:
for each image channel in the one or more image channels, filtering the value of each pixel point based on the corresponding filtering kernel of the region to which each pixel point belongs to obtain filtered image data corresponding to each image channel;
and carrying out fusion processing on the filtered image data of the one or more image channels to obtain a virtual image.
16. The method of claim 15, wherein the one or more image channels include a Y channel, a U channel, and a V channel, and the image data of the image to be processed includes image data of the Y channel, the U channel, and the V channel.
17. The method of claim 16, wherein the number of pixel points of the Y channel is different from the number of pixel points of the U channel or the V channel.
18. The method according to claim 17, wherein the number of pixel points of the Y channel in the image to be processed is twice the number of pixel points of the U channel or the V channel.
19. The method as claimed in claim 18, wherein the filtering radius of the same pixel point to be processed in the image to be processed in the U channel or the V channel is 1/2 of the filtering radius of the same pixel point to be processed in the Y channel.
20. The method of any one of claims 1 to 19, further comprising:
and expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary in the image to be processed belongs.
21. The method according to claim 20, wherein the expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary in the image to be processed belongs comprises:
and expanding the image to be processed by taking each boundary in the image to be processed as a symmetry axis and taking the filtering radius corresponding to the region to which the pixel points on the boundary belong as an expansion width.
22. The method according to claim 20 or 21, wherein when the image to be processed includes image data of a plurality of channels, the expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary in the image to be processed belongs includes:
and for each channel in the plurality of channels, expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary belongs.
23. The method of claim 22, wherein the extension widths of the image data for the plurality of channels are different from each other.
24. An image processing apparatus characterized by comprising:
a memory for storing a computer program;
a processor for invoking the computer program, which when executed by the processor, causes the apparatus to perform the steps of:
acquiring an image to be processed, wherein the image to be processed comprises an adjacent progressive blurring region and a non-progressive blurring region;
filtering the progressive blurring region based on a first filtering core, and filtering the non-progressive blurring region based on a second filtering core; wherein each of the first and second filter kernels comprises an inner value point and an outer value point, and the outer value point in each filter kernel surrounds the inner value point; the filtering weight of the internal value points in the first filtering kernel is greater than that of the internal value points in the second filtering kernel, and the number and the position of the internal value points in the first filtering kernel are the same as those of the internal value points in the second filtering kernel.
25. The apparatus of claim 24, wherein a filter radius of the first filter kernel is the same as a filter radius of the second filter kernel.
26. Apparatus according to claim 24 or 25, wherein the centre of the first filter kernel's internal value point coincides with the centre of the first filter kernel, and the centre of the second filter kernel's internal value point coincides with the centre of the second filter kernel.
27. The apparatus according to any one of claims 24 to 26, wherein the filtering weight of an internal value point in the first filtering kernel is w_1, the filtering weight of an internal value point in the second filtering kernel is w_2, and w_1 = (1 + w_2)/2; wherein w_1 and w_2 are both positive numbers.
28. The apparatus according to any of claims 24 to 27, wherein the image to be processed comprises at least one level of blurring, each level of blurring comprising the non-progressive blurring region and the progressive blurring region that are adjacent.
29. The apparatus of claim 28, wherein the computer program, when executed by the processor, causes the apparatus to further perform the steps of:
and determining the progressive blurring region in each blurring layer according to the boundary of each blurring layer and the width of the progressive blurring region.
30. The apparatus of claim 29, wherein a width of the progressive blurring region is a predefined value.
31. The apparatus of claim 29, wherein a width of the progressive blurring region is related to a width and/or a filter radius of a non-progressive blurring region in the blurring hierarchy.
32. The apparatus according to any of the claims 28 to 31, wherein the image to be processed further comprises at least one region of interest, each region of interest is surrounded by at least one blurring layer, and the at least one blurring layer surrounding one region of interest is distributed in turn from inside to outside, wherein in each blurring layer the non-progressive blurring region surrounds the progressive blurring region.
33. The apparatus of claim 32, wherein the boundaries of the region of interest comprise lines that are not parallel and/or perpendicular to horizontal boundaries of the image to be processed.
34. The apparatus according to claim 32 or 33, wherein in at least one blurring hierarchy surrounding a region of interest, the filtering radii corresponding to non-progressive blurring regions increase sequentially from inside to outside.
35. The apparatus of claim 34, wherein in the at least one blurring hierarchy that surrounds a region of interest, the non-progressive blurring regions have respective filter radii that are continuous positive integers.
36. The apparatus of any of claims 28 to 35, wherein a number of the at least one blurring layer is determined by a transmission bandwidth.
37. The apparatus according to any one of claims 24 to 36, wherein the filtering manner for the non-progressive blurring region in the image to be processed is mean filtering or gaussian filtering.
38. The apparatus of any of claims 24 to 37, wherein the image data of the image to be processed comprises image data of one or more image channels;
the computer program, when executed by the processor, causes the apparatus to further perform the steps of:
for each image channel in the one or more image channels, filtering the value of each pixel point based on the corresponding filtering kernel of the region to which each pixel point belongs to obtain filtered image data corresponding to each image channel;
and carrying out fusion processing on the filtered image data of the one or more image channels to obtain a virtual image.
39. The apparatus of claim 38, wherein the one or more image channels comprise a Y channel, a U channel, and a V channel, and the image data of the image to be processed comprises image data of the Y channel, the U channel, and the V channel.
40. The apparatus of claim 39, wherein a number of pixel points of the Y-channel is different from a number of pixel points of the U-channel or a number of pixel points of the V-channel.
41. The apparatus according to claim 40, wherein the number of pixel points of the Y channel in the image to be processed is twice the number of pixel points of the U channel or the V channel.
42. The apparatus of claim 41, wherein a filtering radius of a same pixel point to be processed in the image to be processed in the U channel or the V channel is 1/2 of a filtering radius of a same pixel point to be processed in the Y channel.
43. An apparatus according to any one of claims 24 to 42, wherein the computer program when executed by the processor causes the apparatus to further perform the steps of:
and expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary in the image to be processed belongs.
44. The apparatus of claim 43, wherein the computer program, when executed by the processor, causes the apparatus to further perform the steps of:
and expanding the image to be processed by taking each boundary in the image to be processed as a symmetry axis and taking the filtering radius corresponding to the region to which the pixel points on the boundary belong as an expansion width.
45. The apparatus according to claim 43 or 44, wherein the computer program, when executed by the processor, causes the apparatus to further perform the steps of:
and when the image to be processed comprises image data of a plurality of channels, expanding the image to be processed according to the filtering radius corresponding to the region to which the pixel point at the boundary belongs for each channel of the plurality of channels.
46. The apparatus of claim 45, wherein the extension widths of the image data for the plurality of channels are different from each other.
47. The apparatus of any one of claims 24 to 46, wherein the apparatus is integrated on a chip.
48. A computer-readable storage medium, having stored thereon a computer program which, when executed, implements the method of any one of claims 1 to 23.
49. A computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 23.
CN201980039065.8A 2019-11-26 2019-11-26 Image processing method and device Pending CN112334942A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121048 WO2021102704A1 (en) 2019-11-26 2019-11-26 Image processing method and apparatus

Publications (1)

Publication Number Publication Date
CN112334942A true CN112334942A (en) 2021-02-05

Family

ID=74319807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980039065.8A Pending CN112334942A (en) 2019-11-26 2019-11-26 Image processing method and device

Country Status (2)

Country Link
CN (1) CN112334942A (en)
WO (1) WO2021102704A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4889538B2 (en) * 2007-03-27 2012-03-07 三洋電機株式会社 Image processing device
CN101587586B (en) * 2008-05-20 2013-07-24 株式会社理光 Device and method for processing images
CN102170552A (en) * 2010-02-25 2011-08-31 株式会社理光 Video conference system and processing method used therein
CN104751407B (en) * 2015-03-11 2019-01-25 百度在线网络技术(北京)有限公司 A kind of method and apparatus for being blurred to image
CN104751405B (en) * 2015-03-11 2018-11-13 百度在线网络技术(北京)有限公司 A kind of method and apparatus for being blurred to image
CN108665494A (en) * 2017-03-27 2018-10-16 北京中科视维文化科技有限公司 Depth of field real-time rendering method based on quick guiding filtering
CN107749046B (en) * 2017-10-27 2020-02-07 维沃移动通信有限公司 Image processing method and mobile terminal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112967205A (en) * 2021-03-25 2021-06-15 苏州天准科技股份有限公司 Gray code filter-based outlier correction method, storage medium, and system
WO2023245362A1 (en) * 2022-06-20 2023-12-28 北京小米移动软件有限公司 Image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2021102704A1 (en) 2021-06-03

Similar Documents

Publication Publication Date Title
Galdran Image dehazing by artificial multiple-exposure image fusion
CN108205804B (en) Image processing method and device and electronic equipment
Thanh et al. An adaptive method for image restoration based on high-order total variation and inverse gradient
CN111275626B (en) Video deblurring method, device and equipment based on ambiguity
US10424054B2 (en) Low-illumination image processing method and device
US9142009B2 (en) Patch-based, locally content-adaptive image and video sharpening
CN110706174B (en) Image enhancement method, terminal equipment and storage medium
US8965141B2 (en) Image filtering based on structural information
JP4862897B2 (en) Image processing method
CN109214996B (en) Image processing method and device
US20190340738A1 (en) Method and device for image processing
US9613405B2 (en) Scalable massive parallelization of overlapping patch aggregation
EP3438923B1 (en) Image processing apparatus and image processing method
CN111260580A (en) Image denoising method based on image pyramid, computer device and computer readable storage medium
CN111968057A (en) Image noise reduction method and device, storage medium and electronic device
CN112334942A (en) Image processing method and device
US9305338B1 (en) Image detail enhancement and edge sharpening without overshooting
Zhu et al. Low-light image enhancement network with decomposition and adaptive information fusion
CN114092407A (en) Method and device for processing video conference shared document in clear mode
CN112200719B (en) Image processing method, electronic device, and readable storage medium
CN113744294A (en) Image processing method and related device
Kang et al. Simultaneous image enhancement and restoration with non-convex total variation
CN116468636A (en) Low-illumination enhancement method, device, electronic equipment and readable storage medium
CN112313700A (en) Image processing method and device
CN111986095B (en) Image processing method and image processing device based on edge extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination