CN113592776A - Image processing method and device, electronic device and storage medium

Image processing method and device, electronic device and storage medium

Info

Publication number
CN113592776A
CN113592776A (application number CN202110736622.0A)
Authority
CN
China
Prior art keywords
image
processed
sharpening
texture
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110736622.0A
Other languages
Chinese (zh)
Inventor
贾澜鹏 (Jia Lanpeng)
饶青 (Rao Qing)
王光甫 (Wang Guangfu)
蒋霆 (Jiang Ting)
刘帅成 (Liu Shuaicheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Kuangshi Jinzhi Technology Co ltd
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Original Assignee
Chengdu Kuangshi Jinzhi Technology Co ltd
Beijing Kuangshi Technology Co Ltd
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Kuangshi Jinzhi Technology Co ltd, Beijing Kuangshi Technology Co Ltd, Beijing Megvii Technology Co Ltd filed Critical Chengdu Kuangshi Jinzhi Technology Co ltd
Priority to CN202110736622.0A priority Critical patent/CN113592776A/en
Publication of CN113592776A publication Critical patent/CN113592776A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T 5/70
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G06T 7/41 Analysis of texture based on statistical description of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image

Abstract

The application provides an image processing method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring an image to be processed; performing edge detection on the image to be processed to generate an edge map; dividing the image to be processed into texture regions of several categories according to the gradient direction and gradient strength of the edge map; and sharpening the texture regions of different categories with different sharpening strengths to obtain a sharpened image. With the technical scheme provided by the application, different texture regions can be sharpened with different strengths: sharpening can be weakened in noise regions so that noise is not sharpened and amplified, and strengthened in weak edge regions so that edges become more distinct, thereby making the image sharper.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
The super-resolution problem has long been one of the important topics in the field of computer vision; the goal is to obtain a high-resolution image from a low-resolution image through algorithmic processing. In recent years, many researchers have attempted to achieve this goal with deep learning. For example, "Multi-scale Residual Network for Image Super-Resolution" obtains a super-resolution effect by using a multi-scale residual network, and "Learning Texture Transformer Network for Image Super-Resolution" compensates texture information by using other high-resolution pictures as reference frames.
In practical applications, however, because images contain a wide variety of targets and training samples cannot cover them all, a trained neural network model cannot improve the sharpness of every image.
Disclosure of Invention
The embodiments of the application provide an image processing method for improving the sharpness of an image.
The embodiment of the application provides an image processing method, which comprises the following steps:
acquiring an image to be processed;
carrying out edge detection on the image to be processed to generate an edge map;
dividing the image to be processed into texture regions of various categories according to the gradient direction and the gradient strength of the edge map;
and sharpening the texture regions of different categories with different sharpening strengths to obtain a sharpened image.
In an embodiment, the different classes of texture regions include one or more of a repetitive texture region, a noise region, a strong edge region, and a weak edge region.
In an embodiment, the dividing the image to be processed into texture regions of various categories according to the gradient direction and the gradient strength of the edge map includes:
calculating the local variance of the gradient direction in the unit region and the local mean value of the gradient strength in the unit region according to the gradient direction and the gradient strength of each pixel point in the unit region;
and dividing the image to be processed into texture regions of various categories according to the local variance of the gradient direction and the local mean of the gradient strength in each unit region.
In an embodiment, the dividing the image to be processed into texture regions of various categories according to the local variance of the gradient direction and the local mean of the gradient strength in each unit region includes one or more of the following steps:
marking unit areas, in the image to be processed, of which the local variance is greater than a first threshold and the local mean is greater than a second threshold as repeated texture areas;
marking unit areas with the local variance larger than a first threshold and the local mean smaller than or equal to a second threshold as noise areas;
marking unit areas with the local variance smaller than or equal to a first threshold and the local mean larger than a second threshold as strong edge areas;
and marking the unit area with the local variance smaller than or equal to a first threshold and the local mean smaller than or equal to a second threshold as a weak edge area.
In an embodiment, before performing sharpening processing on different classes of texture regions with different sharpening strengths to obtain a sharpened image, the method further includes:
when the image to be processed contains a repetitive texture region, replacing the repetitive texture region with a reference image to obtain an updated repetitive texture region; the reference image refers to the repetitive texture region in a reference frame, and the reference frame refers to a frame whose sharpness is greater than that of the image to be processed.
In an embodiment, sharpening the texture regions of different categories with different sharpening strengths includes:
sharpening each category of texture region in the image to be processed with the sharpening parameters correspondingly configured for that category.
In an embodiment, acquiring the image to be processed includes:
acquiring multiple frames of captured images;
selecting a reference frame from the multiple captured images, with the remaining captured images serving as target frames;
aligning a plurality of the target frames based on the reference frame;
and fusing the aligned target frames and the reference frame to obtain the image to be processed.
In one embodiment, aligning a plurality of the target frames based on the reference frame comprises:
for each target frame, transforming the target frame according to the homography matrix mapped to the reference frame by the target frame; wherein the scaling factor of the homography matrix is an upsampling multiple.
In an embodiment, the fusing the aligned target frame and the reference frame to obtain the image to be processed includes:
calculating pixel difference between the aligned target frame and the reference frame pixel by pixel;
comparing the pixel difference with a threshold value, and determining a fusion area in the target frame, wherein the pixel difference is smaller than the threshold value;
generating a corresponding fusion weight for each pixel point according to the pixel difference corresponding to each pixel point in the fusion area;
and for each pixel point, fusing the reference frame and the fusion area according to the corresponding fusion weight to obtain the image to be processed.
In an embodiment, the fusing the aligned target frames and the reference frame to obtain the image to be processed includes:
fusing the aligned target frames with the reference frame to obtain a first intermediate image;
judging whether the sensitivity of the first intermediate image is greater than a standard value;
and if the sensitivity is greater than a standard value, performing single-frame noise reduction on the first intermediate image to obtain the image to be processed.
The present application also provides an image processing apparatus including:
the image acquisition module is used for acquiring an image to be processed;
the edge detection module is used for performing edge detection on the image to be processed to generate an edge map;
the region dividing module is used for dividing the image to be processed into texture regions of various categories according to the gradient direction and the gradient strength of the edge map;
and the partition sharpening module is used for sharpening different types of texture regions according to different sharpening strengths to obtain a sharpened image.
The present application also provides an electronic device, which includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image processing method described above.
The present application also provides a computer-readable storage medium storing a computer program executable by a processor to perform the above-described image processing method.
According to the technical scheme provided by the embodiments of the application, edge detection is performed on the image to be processed to generate an edge map, and the image to be processed is divided into texture regions of several categories according to the gradient direction and gradient strength of the edge map. Different texture regions can then be sharpened with different strengths: sharpening can be weakened in noise regions so that noise is not sharpened and amplified, and strengthened in weak edge regions so that edges become more distinct, thereby making the image sharper.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly described below.
Fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 3 shows edge maps under different target parameters provided by an embodiment of the present application;
FIG. 4 is a detailed flowchart of step S230 in the corresponding embodiment of FIG. 2;
fig. 5 is a schematic flowchart of an image processing method according to another embodiment of the present application;
FIG. 6 is a diagram illustrating sharpening parameters for different classes of texture regions;
FIG. 7 is a flowchart illustrating an image processing method according to another embodiment of the present application;
fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device 100 may be configured to execute the image processing method provided by the embodiment of the present application. As shown in fig. 1, the electronic device 100 includes: one or more processors 102, and one or more memories 104 storing processor-executable instructions. Wherein the processor 102 is configured to execute an image processing method provided by the following embodiments of the present application.
The processor 102 may be a gateway, or may be an intelligent terminal, or may be a device including a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), or other form of processing unit having data processing capability and/or instruction execution capability, and may process data of other components in the electronic device 100, and may control other components in the electronic device 100 to perform desired functions.
The memory 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the image processing methods described below. Various applications and data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
In one embodiment, the electronic device 100 shown in FIG. 1 may also include an input device 106, an output device 108, and a data acquisition device 110, which are interconnected via a bus system 112 and/or other form of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device 100 may have other components and structures as desired.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like. The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like. The data acquisition device 110 may acquire an image of a subject and store the acquired image in the memory 104 for use by other components. Illustratively, the data acquisition device 110 may be a camera.
In an embodiment, the devices in the example electronic device 100 for implementing the image processing method of the embodiment of the present application may be integrally disposed, or may be disposed in a decentralized manner, such as integrally disposing the processor 102, the memory 104, the input device 106 and the output device 108, and disposing the data acquisition device 110 separately.
In an embodiment, the example electronic device 100 for implementing the image processing method of the embodiment of the present application may be implemented as a smart terminal such as a smart phone, a tablet computer, a smart watch, an in-vehicle device, and the like.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present application. As shown in fig. 2, the method includes the following steps S210 to S240.
Step S210: and acquiring an image to be processed.
The image to be processed may be a captured original image or an image obtained by preprocessing the original image. The preprocessing process is described below and is not detailed here. An electronic device with a camera can obtain the image to be processed by preprocessing a captured original image; the image to be processed can also be acquired directly from an external device or be stored locally in advance.
Step S220: and carrying out edge detection on the image to be processed to generate an edge map.
An edge is the set of pixels around which the gray level changes sharply; it is the most fundamental feature of an image. Edges exist between objects, the background, and regions. Edge detection can adopt an existing edge detection method; its main tool is the edge detection template, and commonly used templates include the DoG (difference of Gaussians) operator, the Roberts operator, the Sobel operator, the Prewitt operator, and the like.
The edge map is an image obtained by extracting an edge of the image to be processed and is used for indicating a point with a sharp gray level change in the image to be processed.
In an embodiment, edge detection of the image to be processed may be performed with the DoG operator. As shown in fig. 3, the σ combination of (a) is (1.1, 1.1) and that of (b) is (0.3, 0.4); it can be seen that a large σ combination filters out much of the fine noise, while a small σ combination retains more detail. Therefore, in an embodiment, the target parameters (i.e., the σ combination) of the DoG operator can be determined by a parameter search, and edge detection can be performed on the image to be processed based on these target parameters to generate an edge map. The target parameters can be regarded as the σ combination that filters out most of the fine noise while still retaining more detail.
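A minimal sketch of DoG edge detection along these lines, assuming OpenCV and NumPy are available; the default σ pair below mirrors the small combination discussed above and is illustrative only, not the patent's searched values:

```python
import cv2
import numpy as np

def dog_edge_map(gray: np.ndarray, sigma1: float = 0.3, sigma2: float = 0.4) -> np.ndarray:
    """Difference-of-Gaussians edge map; the response magnitude plays the role of gradient strength."""
    g1 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma1)  # kernel size derived from sigma
    g2 = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma2)
    dog = np.abs(g1 - g2)
    # Normalize to [0, 255] so the map can be thresholded or viewed like an image.
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX)
```

A parameter search over (σ1, σ2) pairs would then score each candidate edge map, for example by how much fine noise it suppresses versus how much detail it keeps, and retain the best-scoring pair as the target parameters.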
Step S230: and dividing the image to be processed into texture regions of various categories according to the gradient direction and the gradient strength of the edge map.
The gradient direction of each pixel is perpendicular to the edge direction and can be represented as an angle relative to the positive x-axis; the gradient strength of each pixel can be taken as its gray value in the edge map. According to the gradient direction and gradient strength of each pixel in the edge map, the following division can be made:
TABLE 1 Texture region categories

Type | Directionality | Strength
Repetitive texture (e.g., grass, patterns) | None | Strong
Strong edge (lines) | Present | Strong
Weak edge (lines) | Present | Weak
Noise | None | Weak
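As a concrete reading of Table 1, the per-pixel quantities can be computed roughly as follows. This is a sketch assuming Sobel derivatives over the edge map; taking the edge map's gray value as the strength follows the text above, and angle wraparound is ignored for simplicity:

```python
import cv2
import numpy as np

def gradient_fields(edge_map: np.ndarray) -> tuple:
    """Per-pixel gradient direction (radians vs. the positive x-axis) and gradient strength."""
    gx = cv2.Sobel(edge_map, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(edge_map, cv2.CV_32F, 0, 1)
    direction = np.arctan2(gy, gx)           # perpendicular to the edge direction
    strength = edge_map.astype(np.float32)   # gray value in the edge map, per the text
    return direction, strength
```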
The different classes of texture regions include one or more of repetitive texture regions, noise regions, strong edge regions, and weak edge regions.
The main purpose of this division is to isolate the noise regions, exploiting the non-directionality of noise. Specifically, noise generally has two characteristics: its intensity is low, and its local gradient directions are disordered (large variance). These two characteristics distinguish noise from repetitive texture regions. Meanwhile, although only the noise region is a flat region, weak edge regions also usually have a low gradient response; their local gradient directions, however, are predominantly consistent (small variance), so noise regions and weak edge regions can also be distinguished.
Accordingly, in an embodiment, as shown in fig. 4, the step S230 specifically includes the following steps S231 to S232.
Step S231: and calculating the local variance of the gradient direction in the unit region and the local mean value of the gradient strength in the unit region according to the gradient direction and the gradient strength of each pixel point in the unit region.
The unit region may comprise several pixels, for example a 3 × 3 pixel area. The local variance is the variance of the gradient direction within the unit region; the local mean is the mean of the gradient strength within the unit region.
Step S232: and dividing the image to be processed into texture regions of various categories according to the local variance of the gradient direction and the local mean of the gradient strength in each unit region.
Since the noise region and the repeated texture region are non-directional (i.e., out of order), the unit region having a large local variance may be considered to be a noise region or a repeated texture region. Since the gradient strength of the noise region is weak, if the average value of the gradient strength of the unit region is small, it can be determined that the unit region is the noise region, not the repeated texture region.
In an embodiment, respective thresholds may be set for the local variance and the local mean. Marking unit areas with local variance larger than a first threshold value and local mean larger than a second threshold value in the image to be processed as repeated texture areas; marking unit areas with local variance larger than a first threshold and local mean smaller than or equal to a second threshold as noise areas; marking unit areas with local variance smaller than or equal to a first threshold and local mean larger than a second threshold as strong edge areas; and marking the unit area with the local variance smaller than or equal to the first threshold and the local mean smaller than or equal to the second threshold as a weak edge area.
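A minimal sketch of this four-way marking, assuming NumPy/SciPy, a 3 × 3 unit region, and the direction/strength fields from the sketch above; the thresholds t1 and t2 are illustrative assumptions, and angles are treated as plain values (circular statistics are ignored for simplicity):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def classify_regions(direction, strength, t1, t2, win=3):
    """Label each pixel: 0 = noise, 1 = repetitive texture, 2 = strong edge, 3 = weak edge."""
    mean_dir = uniform_filter(direction, size=win)
    var_dir = uniform_filter(direction ** 2, size=win) - mean_dir ** 2  # local variance of direction
    mean_str = uniform_filter(strength, size=win)                       # local mean of strength
    labels = np.empty(direction.shape, dtype=np.uint8)
    labels[(var_dir > t1) & (mean_str > t2)] = 1    # disordered direction, strong: repetitive texture
    labels[(var_dir > t1) & (mean_str <= t2)] = 0   # disordered direction, weak: noise
    labels[(var_dir <= t1) & (mean_str > t2)] = 2   # consistent direction, strong: strong edge
    labels[(var_dir <= t1) & (mean_str <= t2)] = 3  # consistent direction, weak: weak edge
    return labels
```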
For example, a texture mask image may be generated, with different colors representing different texture regions. Red represents strong edge regions, green represents repeat texture regions, blue represents weak edge regions, and black represents noise regions.
Step S240: And sharpening the texture regions of different categories with different sharpening strengths to obtain a sharpened image.
Sharpening compensates the contours of an image and enhances its edges and the parts where the gray level jumps, making the image sharper; it falls into two types, spatial-domain processing and frequency-domain processing. Image sharpening highlights the edges, contours, or features of certain linear targets in an image. This filtering improves the contrast between feature edges and the surrounding pixels and is therefore also called edge enhancement.
The sharpening strength refers to the magnitude of the sharpening weight; the higher the strength, the more pronounced the edges become. Sharpening different regions with different strengths reduces noise while retaining more detail. The sharpened image is the image obtained by sharpening the image to be processed; it is called the sharpened image for distinction.
Specifically, weaker sharpening may be used in noise regions to reduce the negative effect of amplifying noise in flat areas, which would otherwise appear more cluttered. Weak edge regions require relatively strong sharpening to ensure that these non-salient edges appear sharper, making the picture visually richer in detail.
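A minimal sketch of region-dependent sharpening via unsharp masking, assuming OpenCV/NumPy and the label map from the classification step; the per-class gains below are illustrative assumptions, not the configured parameters of fig. 6:

```python
import cv2
import numpy as np

def sharpen_by_region(img: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Unsharp masking with a per-pixel gain chosen by texture class."""
    gains = {0: 0.2,   # noise: weak sharpening, avoid amplifying noise
             1: 1.2,   # repetitive texture: strong sharpening
             2: 0.8,   # strong edge: moderate, these edges are already salient
             3: 1.5}   # weak edge: strongest, make faint edges visible
    w = np.zeros(labels.shape, dtype=np.float32)
    for cls, g in gains.items():
        w[labels == cls] = g
    imgf = img.astype(np.float32)
    blur = cv2.GaussianBlur(imgf, (0, 0), 1.0)
    detail = imgf - blur                      # high-frequency component
    if imgf.ndim == 3:
        w = w[..., None]                      # broadcast the gain over color channels
    return np.clip(imgf + w * detail, 0, 255).astype(np.uint8)
```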
According to the technical scheme provided by the embodiments of the application, edge detection is performed on the image to be processed to generate an edge map, and the image to be processed is divided into texture regions of several categories according to the gradient direction and gradient strength of the edge map. Different texture regions can then be sharpened with different strengths: sharpening can be weakened in noise regions so that noise is not sharpened and amplified, and strengthened in weak edge regions so that edges become more distinct, thereby making the image sharper.
In an embodiment, as shown in fig. 5, after the step S230 and before the step 240, the method provided in the embodiment of the present application further includes a step S501.
Step S501: and when the image to be processed contains a repeated texture region, replacing the repeated texture region by using a reference image to obtain an updated repeated texture region.
The reference image refers to the repetitive texture region in a reference frame. The reference frame can be regarded as the sharpest image captured at the same time as the image to be processed; that is, its sharpness is higher than that of the image to be processed.
Replacing the repetitive texture region of the image to be processed with the reference image ensures that more information is retained in the image before sharpening, compensates for the loss caused by noise reduction, and reduces the black-and-white fringing that sharpening produces on detail-poor images.
In an embodiment, step S240 specifically includes: sharpening each category of texture region in the image to be processed with the sharpening parameters correspondingly configured for that category.
FIG. 6 is a diagram illustrating sharpening parameters for different classes of texture regions. The vertical axis represents the sharpening weight, and the horizontal axis represents the gradation value. As shown in fig. 6, 601 denotes a sharpening parameter for a noise region, 602 denotes a sharpening parameter for a weak edge region, 603 denotes a sharpening parameter for a repeated texture region, and 604 denotes a sharpening parameter for a strong edge region. The sharpening parameters for different classes of texture regions may be configured in advance and stored in the electronic device.
Thus, the noise region in the image to be processed can be sharpened based on sharpening parameter 601, the weak edge region based on sharpening parameter 602, the repetitive texture region based on sharpening parameter 603, and the strong edge region based on sharpening parameter 604. The noise region therefore receives a weaker sharpening strength, which avoids amplifying noise, while the weak edge region and the repetitive texture region receive stronger sharpening strengths, which makes boundaries sharper and the final image clearer.
Fig. 7 is a flowchart illustrating an image processing method according to another embodiment of the present application, and as shown in fig. 7, the method may include the following steps:
① Frame selection by sharpness: acquiring multiple frames of captured images, selecting a reference frame from them, and taking the remaining captured images as target frames.
the reference frame can be an image with the highest overall image sobel response in the multi-frame shooting images. The image with the highest overall sobel response can be regarded as the shot image with the largest average value of the overall gradient strength. For the purpose of discrimination, the other images than the reference frame in the multi-frame captured image may be referred to as target frames.
② Alignment: aligning the target frames based on the reference frame.
The alignment algorithm may use the classic ECC algorithm, with a single homography matrix completing the image alignment: each target frame is transformed based on its homography. To prevent the precision loss caused by repeated resampling, the scale coefficient of the homography used for alignment may be adjusted during alignment, i.e., the scaling factor is set to the desired upsampling multiple. The interpolation required by the alignment then performs the bicubic upsampling directly, avoiding the precision loss of interpolating twice, once for alignment and once more for subsequent upsampling.
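A minimal sketch of this alignment step, assuming OpenCV and single-channel (grayscale) inputs; the upsampling multiple `scale` and the termination criteria are illustrative assumptions. Conjugating the homography H by the scaling matrix S = diag(s, s, 1) expresses the same mapping on the upsampled grid, so a single bicubic warp performs both alignment and upsampling:

```python
import cv2
import numpy as np

def align_to_reference(target: np.ndarray, reference: np.ndarray, scale: float = 2.0) -> np.ndarray:
    """ECC homography alignment with the warp rescaled to the upsampled grid."""
    warp = np.eye(3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    _, warp = cv2.findTransformECC(reference, target, warp, cv2.MOTION_HOMOGRAPHY, criteria)
    s = np.diag([scale, scale, 1.0]).astype(np.float32)
    warp_up = s @ warp @ np.linalg.inv(s)     # same mapping, expressed on the upsampled grid
    h, w = reference.shape[:2]
    return cv2.warpPerspective(target, warp_up, (int(w * scale), int(h * scale)),
                               flags=cv2.INTER_CUBIC | cv2.WARP_INVERSE_MAP)
```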
③ Multi-frame fusion: fusing the aligned target frames with the reference frame to obtain the image to be processed.
The aligned multi-frame captured images are fused into one image to be processed. For example, the gray values of the aligned frames can be averaged pixel by pixel to serve as the gray value of each pixel, giving a new image that is used as the image to be processed.
As shown in fig. 7, a de-ghosting step can also be incorporated into the multi-frame fusion process.
In one embodiment, the pixel difference between the aligned target frame and the reference frame may be calculated pixel by pixel; and comparing the pixel difference with a preset threshold value, and determining a fusion area in the target frame, wherein the pixel difference is smaller than the threshold value.
Specifically, by calculating the gray-level difference (i.e., pixel difference) pixel by pixel between each target frame and the reference frame and comparing it with a set threshold, the regions where a target frame differs greatly from the reference frame can be found; these regions are excluded from multi-frame fusion to avoid ghosting. The regions of the target frame whose pixel difference is smaller than the threshold are called fusion regions and can participate in multi-frame fusion.
Then generating corresponding fusion weight for each pixel point according to the pixel difference corresponding to each pixel point in the fusion area; and for each pixel point, fusing the reference frame and the fusion area according to the corresponding fusion weight to obtain the image to be processed.
During multi-frame fusion, the difference values can be mapped with Gaussian weights according to the pixel difference between the target frame and the reference frame, so that pixels differing more from the reference frame contribute a smaller fusion proportion and pixels more similar to the reference frame receive larger fusion weights. That is, a pixel with a larger pixel difference gets a smaller fusion weight, and a pixel with a smaller difference gets a larger one.
Then the gray values of the reference frame and the fusion region of each target frame are weighted and added pixel by pixel according to the corresponding fusion weights to obtain a new image. In one embodiment, the new image may be used as the image to be processed.
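A minimal sketch of the de-ghosted weighted fusion described above, assuming NumPy and already-aligned frames; the ghost threshold `tau` and Gaussian width `sigma` are illustrative assumptions:

```python
import numpy as np

def fuse_frames(reference: np.ndarray, targets: list, tau: float = 25.0, sigma: float = 10.0) -> np.ndarray:
    """Gaussian-weighted average of the reference frame and the aligned target frames."""
    ref = reference.astype(np.float32)
    acc = ref.copy()
    wsum = np.ones_like(ref)                         # the reference always contributes weight 1
    for t in targets:
        t = t.astype(np.float32)
        diff = np.abs(t - ref)
        w = np.exp(-(diff ** 2) / (2 * sigma ** 2))  # more similar pixels get larger weights
        w[diff >= tau] = 0.0                         # exclude likely moving (ghost) pixels
        acc += w * t
        wsum += w
    return (acc / wsum).astype(np.uint8)
```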
This ensures that moving objects produce no ghosts in the final result. Moreover, for the repetitive texture regions that are difficult to handle in super-resolution tasks, multi-frame fusion preserves the details and sub-pixel information of each frame while also reducing noise.
In an embodiment, for each pixel, the reference frame and the fusion region are fused according to the corresponding fusion weights to obtain a first intermediate image; that is, the multi-frame fused image can undergo further preprocessing before being used as the image to be processed. For distinction, the image obtained after multi-frame fusion and de-ghosting is therefore called the first intermediate image. As shown in fig. 7, it may be determined whether the sensitivity of the first intermediate image is greater than a standard value. Since image fusion does not affect sensitivity, the sensitivity of the first intermediate image is the sensitivity (ISO) of the camera when the frames were captured. If the sensitivity is greater than the standard value, single-frame noise reduction is performed on the first intermediate image to obtain the image to be processed; the noise reduction may adopt an existing method such as mean filtering or wavelet denoising. Otherwise, if the sensitivity is at or below the standard value, the first intermediate image itself is taken as the image to be processed, and the above steps S210 to S240 are performed.
In one embodiment, as shown in fig. 7: ⑦ a texture intensity mask map is generated from the image to be processed, i.e., the image to be processed is divided into texture regions of several categories; ⑧ the reference frame is pasted onto the repetitive texture regions; ⑨ different sharpening strengths are applied to the different texture regions.
In one embodiment, as shown in fig. 7, the sharpened image obtained after the sharpening processing may be further contrast-stretched to obtain a second sharpened image, and saturation adjustment may then be performed on the second sharpened image to obtain a third sharpened image.
For distinction, the image obtained by the region-wise sharpening is called the sharpened image, the image obtained by contrast-stretching it is called the second sharpened image, and the image obtained by adjusting the saturation of the second sharpened image is called the third sharpened image. The contrast stretching can adopt the classic CLAHE algorithm. The saturation adjustment is needed because the whole processing pipeline operates mainly on the Y (luminance, i.e., gray-scale) channel of the YUV image; the processing changes the intensity of the Y channel and thus the ratio between Y and UV (chrominance), which lowers the saturation.
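A minimal sketch of this post-processing stage, assuming OpenCV: CLAHE applied to the Y channel for contrast stretching, then a simple multiplicative saturation compensation in HSV; the clip limit and saturation gain are illustrative assumptions:

```python
import cv2
import numpy as np

def postprocess(bgr: np.ndarray, clip: float = 2.0, sat_gain: float = 1.1) -> np.ndarray:
    """Contrast-stretch the luminance, then compensate the lowered saturation."""
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    yuv[..., 0] = clahe.apply(yuv[..., 0])                  # stretch the Y (gray-scale) channel only
    second = cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)           # "second sharpened image"
    hsv = cv2.cvtColor(second, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_gain, 0, 255)   # restore saturation lost to Y edits
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)  # "third sharpened image"
```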
The following are embodiments of the apparatus of the present application that may be used to perform the above-described embodiments of the image processing method of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the image processing method of the present application.
Fig. 8 is a block diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 8, the apparatus includes: an image acquisition module 810, an edge detection module 820, a region partitioning module 830, and a region sharpening module 840.
An image obtaining module 810, configured to obtain an image to be processed;
an edge detection module 820, configured to perform edge detection on the image to be processed to generate an edge map;
the region dividing module 830 is configured to divide the image to be processed into texture regions of multiple categories according to the gradient direction and the gradient strength of the edge map;
and the partition sharpening module 840 is configured to perform sharpening processing on different types of texture regions according to different sharpening strengths to obtain a sharpened image.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above image processing method, and is not described herein again.
In an embodiment, the different classes of texture regions include one or more of a repetitive texture region, a noise region, a strong edge region, and a weak edge region.
In one embodiment, the region dividing module 830 includes:
the calculation unit is used for calculating the local variance of the gradient direction in the unit region and the local mean value of the gradient strength in the unit region according to the gradient direction and the gradient strength of each pixel point in the unit region;
and the region dividing unit is used for dividing the image to be processed into texture regions of various categories according to the local variance of the gradient direction and the local mean value of the gradient strength in each unit region.
In an embodiment, the area dividing unit is specifically configured to perform one or more of the following steps:
marking unit areas, in the image to be processed, of which the local variance is greater than a first threshold and the local mean is greater than a second threshold as repeated texture areas;
marking unit areas with the local variance larger than a first threshold and the local mean smaller than or equal to a second threshold as noise areas;
marking unit areas with the local variance smaller than or equal to a first threshold and the local mean larger than a second threshold as strong edge areas;
and marking the unit area with the local variance smaller than or equal to a first threshold and the local mean smaller than or equal to a second threshold as a weak edge area.
In an embodiment, the apparatus further includes a texture replacement module, configured to replace, when the image to be processed includes a repeated texture region, the repeated texture region with a reference image, so as to obtain an updated repeated texture region. The reference image refers to a repeated texture region in a reference frame, and the reference frame refers to a frame of image with definition greater than that of the image to be processed.
In an embodiment, the partition sharpening module 840 is specifically configured to sharpen each category of texture region in the image to be processed with the sharpening parameters correspondingly configured for that category.
In one embodiment, the image acquisition module 810 comprises:
the image acquisition unit is used for acquiring multi-frame shooting images;
the frame selection unit is used for selecting a reference frame from the multiple captured images, with the remaining captured images serving as target frames;
an alignment unit, configured to align the plurality of target frames based on the reference frame;
and the fusion unit is used for fusing the aligned target frames and the reference frame to obtain the image to be processed.
In an embodiment, the alignment unit is specifically configured to: for each target frame, transforming the target frame according to the homography matrix mapped to the reference frame by the target frame; wherein the scaling factor of the homography matrix is an upsampling multiple.
In an embodiment, the fusion unit is specifically configured to: calculating pixel difference between the aligned target frame and the reference frame pixel by pixel;
comparing the pixel difference with a threshold value, and determining a fusion area in the target frame, wherein the pixel difference is smaller than the threshold value;
generating a corresponding fusion weight for each pixel point according to the pixel difference corresponding to each pixel point in the fusion area;
and for each pixel point, fusing the reference frame and the fusion area according to the corresponding fusion weight to obtain the image to be processed.
In an embodiment, the fusion unit is specifically configured to:
fusing the aligned target frames with the reference frame to obtain a first intermediate image;
judging whether the sensitivity of the first intermediate image is greater than a standard value;
and if the sensitivity is greater than a standard value, performing single-frame noise reduction on the first intermediate image to obtain the image to be processed.
In one embodiment, the apparatus further comprises:
and the contrast stretching module is used for performing contrast stretching on the sharpened image to obtain a second sharpened image.
In one embodiment, the apparatus further comprises:
and the saturation adjustment module is used for adjusting the saturation of the second sharpened image to obtain a third sharpened image.
In the embodiments provided in the present application, the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

Claims (13)

1. An image processing method, comprising:
acquiring an image to be processed;
carrying out edge detection on the image to be processed to generate an edge map;
dividing the image to be processed into texture regions of various categories according to the gradient direction and the gradient strength of the edge map;
and sharpening the texture regions of different categories with different sharpening strengths to obtain a sharpened image.
2. The method of claim 1, wherein the different classes of texture regions comprise one or more of repetitive texture regions, noise regions, strong edge regions, and weak edge regions.
3. The method according to claim 2, wherein the dividing the image to be processed into texture regions of various categories according to the gradient direction and gradient strength of the edge map comprises:
calculating the local variance of the gradient direction in the unit region and the local mean value of the gradient strength in the unit region according to the gradient direction and the gradient strength of each pixel point in the unit region;
and dividing the image to be processed into texture regions of various categories according to the local variance of the gradient direction and the local mean of the gradient strength in each unit region.
4. The method according to claim 3, wherein the step of dividing the image to be processed into texture regions of various categories according to the local variance of the gradient direction and the local mean of the gradient strength in each unit region comprises one or more of the following steps:
marking unit areas, in the image to be processed, of which the local variance is greater than a first threshold and the local mean is greater than a second threshold as repeated texture areas;
marking unit areas with the local variance larger than a first threshold and the local mean smaller than or equal to a second threshold as noise areas;
marking unit areas with the local variance smaller than or equal to a first threshold and the local mean larger than a second threshold as strong edge areas;
and marking the unit area with the local variance smaller than or equal to a first threshold and the local mean smaller than or equal to a second threshold as a weak edge area.
5. The method according to any one of claims 2 to 4, wherein before sharpening the different classes of texture regions with different sharpening strengths to obtain a sharpened image, the method further comprises:
when the image to be processed contains a repetitive texture region, replacing the repetitive texture region with a reference image to obtain an updated repetitive texture region; the reference image refers to the repetitive texture region in a reference frame, and the reference frame refers to a frame whose sharpness is greater than that of the image to be processed.
6. The method according to any one of claims 1 to 5, wherein the sharpening process is performed on different classes of texture regions by different sharpening strengths, and comprises the following steps:
and sharpening each category of texture region in the image to be processed with the sharpening parameters corresponding to that category.
7. The method according to any one of claims 1-6, wherein said acquiring the image to be processed comprises:
acquiring multiple frames of captured images;
selecting a reference frame from the multiple captured images, with the remaining captured images serving as target frames;
aligning a plurality of the target frames based on the reference frame;
and fusing the aligned target frames and the reference frame to obtain the image to be processed.
8. The method of claim 7, wherein aligning the plurality of target frames based on the reference frame comprises:
for each target frame, transforming the target frame according to the homography matrix mapped to the reference frame by the target frame; wherein the scaling factor of the homography matrix is an upsampling multiple.
9. The method according to claim 7 or 8, wherein the fusing the aligned target frame and the reference frame to obtain the image to be processed comprises:
calculating pixel difference between the aligned target frame and the reference frame pixel by pixel;
comparing the pixel difference with a threshold value, and determining a fusion area in the target frame, wherein the pixel difference is smaller than the threshold value;
generating a corresponding fusion weight for each pixel point according to the pixel difference corresponding to each pixel point in the fusion area;
and for each pixel point, fusing the reference frame and the fusion area according to the corresponding fusion weight to obtain the image to be processed.
10. The method according to any one of claims 7 to 9, wherein the fusing the aligned plurality of target frames with the reference frame to obtain the image to be processed comprises:
fusing the aligned target frames with the reference frame to obtain a first intermediate image;
judging whether the sensitivity of the first intermediate image is greater than a standard value;
and if the sensitivity is greater than a standard value, performing single-frame noise reduction on the first intermediate image to obtain the image to be processed.
11. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring an image to be processed;
the edge detection module is used for carrying out edge detection on the image to be processed to generate an edge image;
the region dividing module is used for dividing the image to be processed into texture regions of various categories according to the gradient direction and the gradient strength of the edge map;
and the partition sharpening module is used for sharpening different types of texture regions according to different sharpening strengths to obtain a sharpened image.
12. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the image processing method of any one of claims 1-10.
13. A computer-readable storage medium, characterized in that the storage medium stores a computer program executable by a processor to perform the image processing method of any one of claims 1 to 10.
CN202110736622.0A 2021-06-30 2021-06-30 Image processing method and device, electronic device and storage medium Pending CN113592776A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110736622.0A CN113592776A (en) 2021-06-30 2021-06-30 Image processing method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113592776A true CN113592776A (en) 2021-11-02

Family

ID=78245411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110736622.0A Pending CN113592776A (en) 2021-06-30 2021-06-30 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113592776A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5081692A (en) * 1991-04-04 1992-01-14 Eastman Kodak Company Unsharp masking using center weighted local variance for image sharpening and noise suppression
CN101123677A (en) * 2006-08-11 2008-02-13 松下电器产业株式会社 Method, device and integrated circuit for improving image acuteness
US20150178946A1 (en) * 2013-12-19 2015-06-25 Google Inc. Image adjustment using texture mask
US20150348234A1 (en) * 2014-05-30 2015-12-03 National Chiao Tung University Method for image enhancement, image processing apparatus and computer readable medium using the same
CN105046658A (en) * 2015-06-26 2015-11-11 北京大学深圳研究生院 Low-illumination image processing method and device
CN106604057A (en) * 2016-12-07 2017-04-26 乐视控股(北京)有限公司 Video processing method and apparatus thereof
CN109242811A (en) * 2018-08-16 2019-01-18 广州视源电子科技股份有限公司 A kind of image alignment method and device thereof, computer readable storage medium and computer equipment
US20200265577A1 (en) * 2019-02-14 2020-08-20 Clarius Mobile Health Corp. Systems and methods for performing a measurement on an ultrasound image displayed on a touchscreen device
CN110189349A (en) * 2019-06-03 2019-08-30 湖南国科微电子股份有限公司 Image processing method and device
CN110766634A (en) * 2019-10-23 2020-02-07 华中科技大学鄂州工业技术研究院 Image division method and device based on guide filter and electronic equipment
CN112686911A (en) * 2020-12-30 2021-04-20 北京爱奇艺科技有限公司 Control area generation method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REN HAO; XIE LEI; CHEN HUIFANG: "Image Sharpening Algorithm Based on Dynamic Edge Detection", Journal of Hangzhou Dianzi University, no. 04 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821030A (en) * 2022-04-11 2022-07-29 苏州振旺光电有限公司 Planet image processing method, system and device
CN114821030B (en) * 2022-04-11 2023-04-04 苏州振旺光电有限公司 Planet image processing method, system and device
CN116245770A (en) * 2023-03-22 2023-06-09 新光维医疗科技(苏州)股份有限公司 Endoscope image edge sharpness enhancement method and device, electronic equipment and storage medium
CN116245770B (en) * 2023-03-22 2024-03-22 新光维医疗科技(苏州)股份有限公司 Endoscope image edge sharpness enhancement method and device, electronic equipment and storage medium
CN116527922A (en) * 2023-07-03 2023-08-01 浙江大华技术股份有限公司 Image coding method and related device
CN116527922B (en) * 2023-07-03 2023-10-27 浙江大华技术股份有限公司 Image coding method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination