CN114612313A - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN114612313A
Authority
CN
China
Prior art keywords
image
noise reduction
image block
block
reduction algorithm
Prior art date
Legal status
Pending
Application number
CN202011582403.3A
Other languages
Chinese (zh)
Inventor
郝阳阳
杨胜凯
高源
刘高敏
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN114612313A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20092 - Interactive image processing based on input by user
    • G06T2207/20104 - Interactive definition of region of interest [ROI]
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image processing technique extracts image blocks from a first image according to a region of interest and denoises the first image and the image blocks with noise reduction algorithms of different resource occupation levels. Specifically, a noise reduction algorithm with high resource occupation, long run time, and good denoising effect is applied to the image blocks, while an algorithm with low resource occupation, short run time, and weaker denoising effect is applied to the first image; the denoised image and the denoised image blocks are then fused to obtain the final noise-reduced image.

Description

Image processing method and device
Technical Field
The present disclosure relates to image processing, and in particular, to an image processing method and apparatus.
Background
Existing road-gate snapshot cameras capture satisfactory images of passing vehicles in the daytime. At night, however, with insufficient illumination, the captured image tends to be too dark and to lose too much detail. In the prior art, a fill light therefore fires a momentary flash while the camera takes the snapshot, providing extra illumination for the capture.
However, the flash intensity of the fill light cannot be too high, lest it injure the eyes of people in the vehicle. In other words, the illumination provided by the burst-flash fill light still cannot meet the requirements of the snapshot: images captured at night remain dim and contain a large amount of noise, so a noise reduction algorithm is needed to optimize them.
Noise reduction algorithms require large amounts of computing resources. Taking a deep-neural-network denoiser as an example, it relies on a large number of convolution operations, and those convolutions consume considerable compute, so denoising an image takes a long time. How to reduce the time spent denoising an image while still meeting the user's needs, so that the user obtains the denoised image as soon as possible, is therefore a problem that urgently needs solving.
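The compute cost can be made concrete with a back-of-the-envelope multiply-accumulate (MAC) count for a single convolution layer; the layer shapes below are illustrative assumptions, not figures from this disclosure.

```python
# Rough MAC count for one stride-1, 'same'-padded convolution layer,
# to illustrate why full-frame deep-network denoising is expensive.
# All layer shapes here are illustrative assumptions.

def conv_macs(height, width, c_in, c_out, kernel):
    """Multiply-accumulates for one convolution layer."""
    return height * width * c_in * c_out * kernel * kernel

# A single 3x3 conv with 64 in/out channels on a 1080p frame:
full_frame = conv_macs(1080, 1920, 64, 64, 3)
# The same layer applied only to a 256x256 ROI block:
roi_block = conv_macs(256, 256, 64, 64, 3)

print(full_frame // roi_block)  # the full frame costs ~31x more
```

The ratio scales with pixel count, which is exactly the saving the scheme below targets by restricting the expensive denoiser to local blocks.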
Disclosure of Invention
In a first aspect, an image processing method is provided, including: extracting local image blocks from a first image, wherein the number of the local image blocks is one or more; generating a local noise reduction image block by using a noise reduction algorithm for the local image block; and fusing the local noise-reduced image block and a target image to generate a fused image, wherein the target image is obtained according to the first image.
In this scheme, only the local image block is processed with a noise reduction algorithm of high resource occupation; the rest of the image either needs no noise reduction or uses an algorithm with low resource occupation. This avoids denoising the whole image with a high-resource-occupation algorithm and thereby saves the time spent on noise reduction.
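A minimal sketch of the three-step method of the first aspect, assuming hypothetical stand-in denoisers (a 3x3 mean filter for the costly algorithm, a no-op for the cheap one) and an ROI given as a (y0, x0, y1, x1) box:

```python
import numpy as np

def denoise_strong(block):
    # Placeholder for a high-cost denoiser (e.g. a deep network); here a 3x3 mean.
    h, w = block.shape
    padded = np.pad(block, 1, mode="edge")
    out = np.zeros_like(block, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def denoise_cheap(img):
    # Placeholder for a low-cost denoiser; here the identity (no noise reduction).
    return img.astype(np.float64)

def process(first_image, roi):
    y0, x0, y1, x1 = roi
    local_block = first_image[y0:y1, x0:x1]       # extract the local image block
    denoised_block = denoise_strong(local_block)  # costly algorithm on the ROI only
    target = denoise_cheap(first_image)           # cheap (or no) denoise elsewhere
    fused = target.copy()
    fused[y0:y1, x0:x1] = denoised_block          # fuse block into the target image
    return fused
```

The pasted-back block stands in for the fusion step; a real implementation would blend the block boundary, as the description discusses later.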
In a first possible implementation manner of the first aspect,
the target image is one of: (1) the first image; (2) a first noise-reduced image generated by denoising the first image, wherein the noise reduction algorithm used to generate the first noise-reduced image occupies fewer resources than the one used to generate the local noise-reduced image block; (3) a background image block, wherein the background image block comprises the part of the first image other than the local image block, and its size is smaller than that of the first image; (4) a noise-reduced background image block, wherein the noise reduction algorithm used to generate it occupies fewer resources than the one used to generate the local noise-reduced image block.
This scheme exemplifies four ways of obtaining the target image.
In a second possible implementation manner of the first aspect, the local image block is an image block of interest of the first image.
This scheme provides a specific implementation for selecting the local image blocks.
In a third possible implementation manner of the first aspect, the local image blocks include a first image block and a second image block, and the method includes: denoising the first image block by using a first denoising algorithm to generate a first denoised image block; reducing noise of the second image block by using a second noise reduction algorithm to generate a second noise reduced image block, wherein: the resource occupation of the first noise reduction algorithm is higher than the resource occupation of the second noise reduction algorithm.
This scheme introduces a graded noise reduction strategy for the local image blocks: local image blocks of different grades use noise reduction algorithms of different grades.
Optionally, the first image block is a first-priority region of interest of the first image, and the second image block is a second-priority region of interest of the first image.
In a fourth possible implementation manner of the first aspect, the extracting a local image block from the first image specifically further includes: obtaining a region of interest of the second image through image detection, wherein the second image is the first image after format conversion; and taking the position of the region of interest of the second image in the second image as the position of the region of interest of the first image in the first image, and extracting the local image blocks from the first image according to the region of interest.
This solution provides a concrete way of finding the ROI area.
Optionally, the second image is a three-channel image.
In a fifth possible implementation manner of the first aspect, the first image is a RAW format image.
In a sixth possible implementation manner of the first aspect, the first noise reduction algorithm is a noise reduction algorithm of a deep neural network model; the second noise reduction algorithm is a noise reduction algorithm of a deep neural network model.
In a seventh possible implementation manner of the first aspect, the noise reduction effect of the first noise reduction algorithm is better than the noise reduction effect of the second noise reduction algorithm.
In a second aspect, there is provided an image processing apparatus comprising: the image block acquisition module is used for extracting local image blocks from a first image, wherein the number of the local image blocks is one or more; the noise reduction module is used for generating a local noise reduction image block by using a noise reduction algorithm on the local image block; and the fusion module is used for fusing the local noise reduction image block and a target image to generate a fused image, wherein the target image is obtained according to the first image.
The second aspect has various possible implementations similar to the first aspect, with corresponding technical effects.
In a third aspect, a camera is provided, comprising: a lens for collecting light; a sensor for generating a first image by photoelectric conversion of the light collected by the lens; and a processor adapted to carry out the first aspect and its various possible implementations, with the corresponding technical effects.
Drawings
Fig. 1 is an embodiment of a camera configuration.
FIG. 2 is a flow diagram of an embodiment of a method of image processing.
Fig. 3 is a block diagram of an embodiment of an image processing apparatus.
Detailed Description
The brightness of a picture taken by an image pickup apparatus (hereinafter a camera is taken as the example) is directly related to the amount of light entering the camera. When light is insufficient, or when the camera's exposure parameters are poorly chosen, the resulting image may contain a lot of noise. Making the image clear enough for the user then requires a noise reduction algorithm, which costs a great deal of time and computing resources.
The user mainly needs to view the region of interest; the rest of the image is not the focus of attention. In this embodiment, one or more image blocks (called local image blocks) are extracted from the raw image (RAW image) according to a region of interest (ROI). Each extracted block is smaller than the raw image and is denoised with an algorithm that has high resource occupation (e.g., computing resources, processor resources), long run time, and good denoising effect, while the raw image is denoised with an algorithm that has low resource occupation, short run time, and weaker denoising effect. The denoised image blocks and the denoised raw image are then fused into one image. This amounts to using different noise reduction algorithms for different areas of the image: the key areas of the image become clear, while time and computing resources are saved compared with applying the high-quality noise reduction algorithm to the whole image. The noise reduction algorithm mentioned in this embodiment is, for example, a deep-neural-network denoiser.
For example, for a vehicle image captured by a camera (e.g., a checkpoint camera) in an intelligent transportation system, this embodiment may detect the position of a car in the image with a vehicle detection algorithm, take the image block containing the car as the region of interest (ROI), and treat the remaining image as the background image block. The ROI receives high-cost, high-precision noise reduction and the background image block receives low-cost, low-precision noise reduction; the two are then fused to obtain the final noise-reduced RAW image. There are two schemes for selecting the ROI area: one selects exactly the area corresponding to the ROI; the other divides the image into multiple patches and extracts those patches that intersect the ROI.
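The second selection scheme can be sketched as follows; the patch size and the (y0, x0, y1, x1) box format are assumptions for illustration:

```python
# Sketch of the second ROI-selection scheme: divide the frame into a
# fixed grid of patches and keep every patch that intersects the ROI box.
# Patch size and the (y0, x0, y1, x1) box format are assumptions.

def patches_intersecting_roi(img_h, img_w, patch, roi):
    ry0, rx0, ry1, rx1 = roi
    selected = []
    for y in range(0, img_h, patch):
        for x in range(0, img_w, patch):
            py1, px1 = min(y + patch, img_h), min(x + patch, img_w)
            # axis-aligned rectangle intersection test
            if y < ry1 and py1 > ry0 and x < rx1 and px1 > rx0:
                selected.append((y, x, py1, px1))
    return selected

# A 64x64 ROI at (32, 32) inside a 256x256 frame with 64-pixel patches
# touches a 2x2 group of patches.
print(patches_intersecting_roi(256, 256, 64, (32, 32, 96, 96)))
```

Grid-aligned patches make the downstream denoiser's input shapes uniform, at the cost of denoising some pixels outside the ROI.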
As shown in fig. 1, the camera 1 includes: a lens 11, an optical filter 12, an image sensor 13, a system on chip (SoC) 14, and a fill light control circuit 15; optionally, a built-in fill light 16 may also be included. The system on chip specifically includes an image signal processor (ISP) 141 and an encoder 142. Depending on the design, the fill light 16 may also be independent of the camera, i.e., not part of it. Note that the image signal processor 141 and the encoder 142 may be separate chips rather than integrated in the SoC.
The lens 11 collects light from the subject 10. It is generally a lens group composed of one or more pieces of optical glass (or plastic), which may combine concave lenses, convex lenses, M-shaped lenses, and the like, and each lens may be spherical or aspherical. Depending on the number of lenses, a camera may be monocular or binocular. The camera illustrated in fig. 1 has only one lens (lens 11) and is therefore a monocular camera.
The optical filter 12 carries a coating on its surface that filters the light from the lens 11 so that only the required waveband passes. The filter 12 is optional and may be omitted; alternatively, it may be placed in front of the lens 11.
The sensor 13 receives the optical signal filtered by the filter 12, performs photoelectric conversion, and generates an infrared image and/or a color image. The sensor 13 may be exposed only to non-visible light (infrared light is used as the example below), only to visible light, to infrared and visible light simultaneously, or to visible and infrared light alternately. For video at 25 images per second, the total exposure time for one image is 1/25 second. The image generated by the sensor is a raw image; since it is in the RAW file format, it is also called a RAW image or a Bayer image. Depending on the photosensitive elements, RAW images come in several variants, such as RGGB, RYYB, RCCC, RCCB, RGBW, and CMYW. RAW file formats include .3fr, .ari, .dcs, .dcr, .drf, .eip, and .erf. Besides a camera, a RAW image may come from an image scanner or a motion picture film scanner.
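As an illustration of the RGGB variant mentioned above, a Bayer mosaic can be split into its four color planes; the conventional 2x2 cell layout (R at (0,0), G at (0,1) and (1,0), B at (1,1)) is assumed here, and actual sensors vary as the text notes:

```python
import numpy as np

# Splitting an RGGB Bayer mosaic into its four color planes, assuming
# the conventional 2x2 cell: R (0,0), G (0,1), G (1,0), B (1,1).

def split_rggb(raw):
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, g1, g2, b

raw = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_rggb(raw)
print(r.tolist())  # [[0, 2], [8, 10]]
```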
An image signal processor (ISP) 141 receives the RAW image from the sensor 13 and converts it into a YUV, RGB, HSV, Lab, CMY, or YCbCr image. Images in these formats can be collectively called natural images; they are also called three-channel images, because each pixel is represented by 3 values.
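As one concrete example of such a conversion, a single RGB pixel can be mapped to YUV with the BT.601 coefficients; the disclosure does not fix a particular conversion matrix, so this choice is an assumption:

```python
# Converting one RGB pixel to YUV using the BT.601 coefficients, a common
# choice for the RAW-to-YUV conversion an ISP performs. The particular
# matrix is an assumption; the text does not specify one.

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v

y, u, v = rgb_to_yuv(255, 255, 255)  # white: full luma, no chroma
print(round(y), round(u), round(v))
```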
The ISP can also perform image optimization to improve the optical quality of the imagery, for example one or more of the following: removing dark-current noise, linearizing the data to correct nonlinearity, removing dead pixels, denoising, adjusting white balance, focusing and exposure, sharpening the image, and color space conversion (converting to different color spaces for processing).
In addition, the image signal processor 141 may send a control signal to the fill light control circuit 15 to control the operation of the fill light 16, for example its flash duration and flash intensity. The fill light is, for example, a burst-flash lamp.
The encoder 142 may encode the image, e.g., a single image as a JPEG file, or multiple images as H.264 or H.265 video.
Optionally, an image processor may be included between the image signal processor 141 and the encoder 142 to perform image fusion between multiple images output by the ISP 141, or between an image and an image block.
Referring to fig. 2, an embodiment of an image processing method flow is provided below. The method may be performed by the system on chip 14.
Step 21, the camera detects a moving object. For example, the camera monitors the picture, and when a moving object is detected entering the monitored area (e.g., touching a virtual tripwire), a snapshot is triggered. Moving objects include vehicles, aircraft, pedestrians, and animals; a vehicle is used as the example below.
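The virtual-tripwire trigger can be sketched as a simple line-crossing test; the horizontal line and the (y0, x0, y1, x1) box format are illustrative assumptions:

```python
# Sketch of the virtual-tripwire trigger of step 21: a snapshot fires
# when a tracked object's box crosses a horizontal line between two
# frames. The line position and box format are illustrative assumptions.

def crosses_tripwire(prev_box, cur_box, line_y):
    """True when the box bottom moves from above the line to at/below it."""
    return prev_box[2] < line_y <= cur_box[2]

# Object moving down the frame toward a tripwire at y = 100:
print(crosses_tripwire((0, 0, 90, 50), (0, 0, 120, 50), 100))  # True
```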
Specifically, the camera's sensor generates a RAW-format image after exposure; the RAW image is then converted into a YUV image, which is encoded and sent to a display device for display.
When the camera detects that a moving object in the YUV image has entered the monitored area, it issues a snapshot instruction.
Step 22, the camera takes a first image. To increase the brightness of the first image, a flash may be fired during the snapshot; the flash need not be activated in step 21.
The first image is a RAW format image generated by a sensor of a camera.
Step 23, the ISP of the camera performs format conversion on the first image, converting the RAW-format first image into a second image in a second format (e.g., YUV). ROI detection is then performed on the second image, and one or more image blocks are extracted from the first image according to the position of the ROI in the first image; the unextracted part is called the background. The total size of the extracted image blocks is equal to or smaller than that of the second image.
The ISP first uses a vehicle detection model based on a deep neural network to obtain a detection box describing the vehicle's position in the second image. Different vehicles occupy different areas of the picture (a truck, for example, occupies a larger area than a car), so the area occupied in the second image also differs. The ISP therefore generates an ROI box matched to the size of the ROI object (e.g., vehicle, face) based on the detection box.
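One plausible way to derive a size-matched ROI box from a detection box is to expand it by a margin and clamp it to the image bounds; the 10% margin is a hypothetical choice, since the text only says the ROI box matches the object's size:

```python
# Deriving an ROI box from a detector box by expanding it with a margin
# and clamping to the image. The 10% margin is a hypothetical choice;
# the text only says the ROI box matches the size of the detected object.

def roi_from_detection(box, img_h, img_w, margin=0.1):
    y0, x0, y1, x1 = box
    dy = (y1 - y0) * margin
    dx = (x1 - x0) * margin
    return (max(0, int(y0 - dy)), max(0, int(x0 - dx)),
            min(img_h, int(y1 + dy)), min(img_w, int(x1 + dx)))

print(roi_from_detection((100, 200, 300, 400), 1080, 1920))
```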
The present embodiment applies the region division obtained from the second image to the extraction of image blocks from the first image. Optionally, the ROI may further be divided into multiple levels. For example, when several vehicles appear in the picture, a smaller vehicle may be listed as the first-priority ROI, a larger vehicle as the second-priority ROI, a still larger vehicle as the third-priority ROI, and so on; or the face area of the driver in the vehicle is listed as the first-priority ROI and the whole vehicle as the second-priority ROI; or pedestrians are listed as first-priority ROIs and vehicles as second-priority ROIs.
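Priority assignment of this kind can be sketched as a rank table over detection classes; the class names and ranks below are illustrative assumptions:

```python
# Sketch of ROI priority ranking: faces and pedestrians outrank whole
# vehicles. The class names and the rank table are illustrative
# assumptions, matching one of the examples in the text.

PRIORITY = {"face": 1, "pedestrian": 1, "vehicle": 2}

def rank_rois(detections):
    """detections: list of (class_name, box). Returns them sorted so
    first-priority ROIs come first; unknown classes go last."""
    return sorted(detections, key=lambda d: PRIORITY.get(d[0], 99))

dets = [("vehicle", (0, 0, 50, 50)), ("face", (10, 10, 20, 20))]
print([c for c, _ in rank_rois(dets)])  # ['face', 'vehicle']
```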
The second image is composed of ROIs of different levels; alternatively, the second image is composed of an ROI and a non-ROI. In the latter case, the part of the second image other than the ROI is the non-ROI (also called a background image block).
The division of the regions of the second image is described below by way of example.
(1) The first region is the ROI (the region where the vehicle is located); the remainder is the second region (e.g., the background in the second image).
(2) The first region and the second region are both ROIs, the first region being a first-priority ROI and the second region a second-priority ROI; the remainder is the third region (e.g., the background in the second image).
Step 24, following the region division of step 23, extract the local image blocks from the first image and denoise each block according to the noise reduction strategy. The first image itself is either not denoised, or is denoised with a noise reduction algorithm of low resource occupation.
Since the first image and the second image record the same picture, the image blocks can be obtained from the first image by inheriting the region division of the second image.
The corresponding noise reduction strategies for the 2 cases exemplified in step 23 are described in turn below.
For case (1)
The region of the first image where the ROI (the image block containing the vehicle) is located is defined as the first image block.
Noise reduction strategy: the first image block is denoised with a first noise reduction algorithm of high resource occupation to generate a first noise-reduced image block, and the first image is denoised with a second noise reduction algorithm of low resource occupation to generate a first noise-reduced image; or the first image block is denoised with the noise reduction algorithm and the first image is not denoised.
The second region may or may not be denoised.
For case (2)
The region where the first priority ROI is located in the first image is defined as a first image block, and the region where the second priority ROI is located in the first image is defined as a second image block.
Noise reduction strategy: the first image block generates a first noise reduction image block by using a first noise reduction algorithm with high resource occupation, the second image block generates a second noise reduction image block by using a third noise reduction algorithm with low resource occupation, and the first image generates a first noise reduction image by using a second noise reduction algorithm with lowest resource occupation; or the first image block generates a first noise reduction image block by using a first noise reduction algorithm with high resource occupation, the second image block generates a second noise reduction image block by using a third noise reduction algorithm with low resource occupation, and the first image does not reduce noise.
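The graded strategy for case (2) can be sketched with stand-in denoisers of decreasing cost, where more smoothing passes represent a more expensive, more effective algorithm (real systems would use deep networks of different sizes; everything here is a placeholder):

```python
import numpy as np

# Graded denoising sketch: first-priority ROI gets the costliest
# denoiser, second-priority ROI a cheaper one, and the first image the
# cheapest (here: none). Smoothing passes stand in for algorithm cost.

def denoise(img, passes):
    out = img.astype(np.float64)
    for _ in range(passes):  # passes == 0 means "no noise reduction"
        out = (np.roll(out, 1, axis=1) + out + np.roll(out, -1, axis=1)) / 3.0
    return out

def graded_denoise(first_image, roi1, roi2):
    y0, x0, y1, x1 = roi1  # first-priority ROI box
    u0, v0, u1, v1 = roi2  # second-priority ROI box
    fused = denoise(first_image, 0)                               # cheapest tier
    fused[u0:u1, v0:v1] = denoise(first_image[u0:u1, v0:v1], 1)   # middle tier
    fused[y0:y1, x0:x1] = denoise(first_image[y0:y1, x0:x1], 3)   # costliest tier
    return fused
```

Writing the first-priority block last means it wins wherever the two ROIs overlap.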
The third region may or may not be denoised.
In summary, in this embodiment the image blocks extracted from the first image are denoised with a high-resource-occupation algorithm while the first image is denoised with a low-resource-occupation algorithm; or the extracted image blocks are denoised with a high-resource-occupation algorithm and the first image is not denoised.
Step 25, fuse the first image (or the noise-reduced first image) with the image blocks processed by the noise reduction algorithm to generate a fused image. During fusion, a weighted fusion algorithm may be applied to the boundary areas between the image blocks and the image, making the fused image smoother.
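The weighted boundary fusion can be sketched as alpha-blending the pasted block into the background over a narrow border band; the 4-pixel band width is an assumption:

```python
import numpy as np

# Weighted fusion sketch: the block is alpha-blended into the background
# over a small border band so the seam is smooth. The weight is 1 in the
# block interior and ramps down to 0 at the block edge; the 4-pixel band
# width is an assumption.

def paste_blended(background, block, y0, x0, band=4):
    h, w = block.shape
    out = background.astype(np.float64).copy()
    # per-pixel distance to the nearest block edge, along each axis
    wy = np.minimum(np.arange(h), np.arange(h)[::-1])
    wx = np.minimum(np.arange(w), np.arange(w)[::-1])
    alpha = np.minimum.outer(wy, wx).astype(np.float64)
    alpha = np.clip(alpha / band, 0.0, 1.0)
    region = out[y0:y0 + h, x0:x0 + w]
    out[y0:y0 + h, x0:x0 + w] = alpha * block + (1 - alpha) * region
    return out
```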
For case (1) described in steps 23 and 24 above, the specific fusion process is as follows.
When the noise reduction strategy is: the first image block reduces noise by using a first noise reduction algorithm with high resource occupation, and the first image does not reduce noise. The fusion method is: and fusing the first noise-reduced image block and the first image to generate a first fused image. In the first fused image obtained after the fusion, the noise of the region corresponding to the first noise-reduced image block (i.e. the aforementioned ROI) is less than that of the rest of the regions.
When the noise reduction strategy is: the first image block uses a first noise reduction algorithm with high resource occupation, and the first image uses a second noise reduction algorithm with low resource occupation. The fusion method is: and fusing the first noise-reduced image block and the first noise-reduced image to generate a second fused image. In the fused image obtained after the fusion, the noise of the region corresponding to the first image block (i.e. the aforementioned ROI) is less than that of the rest of the regions.
In the case described in (2) above, the specific fusion process is as follows.
When the noise reduction strategy is: the first image block uses a first noise reduction algorithm with high resource occupation, the second image block uses a third noise reduction algorithm with low resource occupation, and the first image does not reduce noise. The fusion method is: and fusing the first noise-reduced image block, the second noise-reduced image block and the first image to generate a third fused image. The third fused image includes three regions: the noise of the three areas, namely the area where the first priority ROI is located, the area where the second priority ROI is located and the rest areas, is increased in sequence.
When the noise reduction strategy is: the first image block uses the first noise reduction algorithm with high resource occupation, the second image block uses the third noise reduction algorithm with low resource occupation, and the first image uses the second noise reduction algorithm with the lowest resource occupation. The fusion method is: fuse the first noise-reduced image block, the second noise-reduced image block, and the first noise-reduced image to generate a fourth fused image. The fourth fused image includes three regions: the region of the first-priority ROI, the region of the second-priority ROI, and the remaining region; the noise of the three regions increases in that order.
In step 25, the first image (or the noise-reduced first image) is fused with the image blocks processed by the noise reduction algorithm to generate the fused image. In another embodiment, instead of the complete first image (or noise-reduced first image), only a portion of it is used for fusion, which reduces the amount of data to be fused and denoised. That is, the first image (or the noise-reduced first image) may be replaced by a part of the first image (for simplicity, called a background image block), or by a noise-reduced background image block obtained by denoising that background image block. In either case, the noise reduction algorithm used occupies fewer resources than the algorithm applied to the image blocks extracted from the first image in step 23. Such an embodiment additionally includes a step of extracting the background image block (and, optionally, denoising it).
In addition, for convenience of description, when an image (or image block) is not denoised with a noise reduction algorithm, it may be regarded as using a noise reduction algorithm with resource occupation of 0 and time consumption of 0. In this scenario, between the first image (or the noise-reduced first image) and the image blocks processed by the noise reduction algorithm, the non-overlapping parts are used directly as parts of the fused image, and the overlapping parts are fused with the fusion algorithm to obtain the fused image.
The background image block is smaller than the first image. However, it cannot be too small, or the fused image would contain holes; the background image block must therefore at least include the parts of the first image not extracted in step 24. In other words, the background image block comprises the part of the first image other than the local image blocks, and its size is smaller than that of the first image.
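The no-holes condition on the background image block can be checked mechanically; the (y0, x0, y1, x1) box format is an assumption:

```python
import numpy as np

# Checking that a candidate background block leaves no holes: every
# pixel of the first image must be covered by the background block or by
# some local image block. Box format (y0, x0, y1, x1) is an assumption.

def covers_without_holes(img_h, img_w, background_box, local_boxes):
    covered = np.zeros((img_h, img_w), dtype=bool)
    for y0, x0, y1, x1 in [background_box, *local_boxes]:
        covered[y0:y1, x0:x1] = True
    return bool(covered.all())

# Background supplies the left 6 columns; the local block covers the rest.
print(covers_without_holes(8, 8, (0, 0, 8, 6), [(0, 4, 8, 8)]))  # True
```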
It should be noted that, in this embodiment, the image blocks are acquired from the first image based on the ROI, but the ROI is not the only basis. For example, the central area of the first image may be taken as the area requiring the noise reduction algorithm (e.g., the first image block) and the rest as the background area; or, if vehicles tend to enter the monitored view from the upper part of the camera's range, the upper half of the first image may be taken as the area requiring the noise reduction algorithm (e.g., the first image block) and the lower half as the background area.
It should further be noted that this embodiment divides the first image into blocks by means of the block division of the second image. In fact, the block division of the first image may be performed in other ways, including: dividing the first image directly by a computer program; dividing it manually; or dividing it by means of a third image (for example, reusing the block division of the previous frame captured by the camera as the block division of the first image).
Referring to fig. 3, the present invention further provides an embodiment of an image processing apparatus. The image processing means may be a physical hardware device such as a camera, or the image processing means may be software such as a program running in the camera. The image processing device comprises an image block acquisition module 31, a noise reduction module 32 and a fusion module 33. Since the details are specifically described in the foregoing embodiments of the image processing method, the functions of the respective modules will be described only briefly, and will not be described in detail.
The image block obtaining module 31 is configured to extract local image blocks from the first image, where the number of the local image blocks is one or more.
The noise reduction module 32 is configured to generate a local noise-reduced image block by applying a noise reduction algorithm to the local image block.
The fusion module 33 is configured to fuse the local noise-reduced image block with a target image to generate a fused image, wherein the target image is obtained from the first image.
The target image may be one of four cases: (1) the first image; (2) a first noise-reduced image generated by reducing noise of the first image using a noise reduction algorithm, wherein the resource occupation of the noise reduction algorithm used to generate the first noise-reduced image is lower than that of the noise reduction algorithm used to generate the local noise-reduced image block; (3) a background image block, wherein the background image block comprises the part of the first image other than the local image block, and the size of the background image block is smaller than that of the first image; (4) a second noise-reduced image block generated by reducing noise of the background image block using a second noise reduction algorithm. The noise reduction operations may be implemented by the noise reduction module 32.
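The fusion step can be sketched as writing the locally denoised block back into the target image at the position from which the local block was extracted (Python with NumPy; the function name `fuse` and the coordinate convention are assumptions for illustration, not the claimed implementation):

```python
import numpy as np

def fuse(target: np.ndarray, denoised_block: np.ndarray, y: int, x: int) -> np.ndarray:
    """Paste the locally noise-reduced block into the target image at (y, x),
    i.e. the position from which the local image block was extracted."""
    fused = target.copy()  # leave the original target image untouched
    h, w = denoised_block.shape[:2]
    fused[y : y + h, x : x + w] = denoised_block
    return fused
```

When the target image is a background image block (cases (3) and (4) above) rather than the full first image, the same paste operation applies after the blocks are assembled onto a full-size canvas.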
Optionally, the local image blocks include a first image block and a second image block, and the noise reduction module is configured to: reduce noise of the first image block using a first noise reduction algorithm to generate a first noise-reduced image block; and reduce noise of the second image block using a second noise reduction algorithm to generate a second noise-reduced image block, wherein the resource occupation of the first noise reduction algorithm is higher than that of the second noise reduction algorithm.
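The relationship between the two algorithms can be illustrated with a minimal sketch (Python with NumPy). Note that the claims elsewhere mention deep-neural-network noise reduction; the repeated versus single box blur below merely stands in for a pair of algorithms with higher and lower resource occupation, and all function names are assumptions:

```python
import numpy as np

def _box_blur(block: np.ndarray) -> np.ndarray:
    """One pass of a 3x3 mean filter, implemented by averaging the nine
    shifted windows of an edge-padded copy of the block."""
    p = np.pad(block.astype(np.float64), 1, mode="edge")
    h, w = block.shape
    return sum(p[dy : dy + h, dx : dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def denoise_heavy(block: np.ndarray) -> np.ndarray:
    """First algorithm: higher resource occupation (five smoothing passes
    stand in for a costlier model)."""
    out = block
    for _ in range(5):
        out = _box_blur(out)
    return out

def denoise_light(block: np.ndarray) -> np.ndarray:
    """Second algorithm: lower resource occupation (a single pass)."""
    return _box_blur(block)
```

Applying the heavier algorithm only to the region-of-interest block and the lighter one to the rest is precisely what lets the method trade noise reduction quality for compute where it matters least.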
Optionally, the image block acquisition module is specifically configured to: obtain a region of interest of a second image through image detection, wherein the second image is the first image after format conversion; and take the position of the region of interest of the second image within the second image as the position of the region of interest of the first image within the first image, and extract the local image blocks from the first image according to the region of interest.
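Because the second image is only a format conversion of the first, the ROI position detected on the second image can be reused directly on the first. The sketch below (Python with NumPy) uses a toy nonzero-pixel bounding box as a stand-in for the image detection step; `detect_roi` and `extract_local_block` are assumed names, not the claimed detector:

```python
import numpy as np

def detect_roi(second_image: np.ndarray):
    """Toy stand-in for image detection on the three-channel second image:
    returns the bounding box (y, x, h, w) of its nonzero pixels."""
    ys, xs = np.nonzero(second_image.sum(axis=2))
    y0, x0 = ys.min(), xs.min()
    return y0, x0, ys.max() - y0 + 1, xs.max() - x0 + 1

def extract_local_block(first_image: np.ndarray, second_image: np.ndarray):
    # The ROI position in the second image is reused as-is in the first image,
    # since format conversion does not change pixel positions.
    y, x, h, w = detect_roi(second_image)
    return first_image[y : y + h, x : x + w]
```

In practice the detector would run on the converted (e.g., three-channel) image while the extraction and noise reduction operate on the first (e.g., RAW) image.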
One or more of the above modules or units may be implemented in software, hardware, or a combination of both. When any of the above modules or units is implemented in software, the software exists as computer program instructions stored in a memory, and a processor may be used to execute the program instructions to implement the above method flows. The processor may include, but is not limited to, at least one of the following computing devices that run software: a Central Processing Unit (CPU), a microprocessor, a Digital Signal Processor (DSP), a Microcontroller (MCU), or an artificial intelligence processor, each of which may include one or more cores for executing software instructions to perform operations or processing. The processor may be built into an SoC (system on chip) or an Application Specific Integrated Circuit (ASIC), or may be a separate semiconductor chip. In addition to a core for executing software instructions, the processor may further include necessary hardware accelerators, such as a Field Programmable Gate Array (FPGA), a PLD (programmable logic device), or a logic circuit implementing dedicated logic operations.
When the above modules or units are implemented in hardware, the hardware may be any one or any combination of a CPU, a microprocessor, a DSP, an MCU, an artificial intelligence processor, an ASIC, an SoC, an FPGA, a PLD, a dedicated digital circuit, a hardware accelerator, or a non-integrated discrete device, and the hardware may run necessary software, or operate independently of software, to perform the above method flows.

Claims (21)

1. An image processing method, comprising:
extracting local image blocks from a first image, wherein the number of the local image blocks is one or more;
generating a local noise reduction image block by using a noise reduction algorithm for the local image block;
and fusing the local noise-reduced image block and a target image to generate a fused image, wherein the target image is obtained according to the first image.
2. The method of claim 1, wherein the target image comprises one of:
(1) the first image;
(2) a first noise-reduced image generated by reducing noise of the first image using a noise reduction algorithm, wherein the resource occupation of the noise reduction algorithm used to generate the first noise-reduced image is lower than that of the noise reduction algorithm used to generate the local noise-reduced image block;
(3) a background image block, wherein the background image block comprises a part of the first image except the local image block, and the size of the background image block is smaller than that of the first image;
(4) a noise-reduced background image block generated by reducing noise of the background image block using a noise reduction algorithm, wherein the resource occupation of the noise reduction algorithm used to generate the noise-reduced background image block is lower than that of the noise reduction algorithm used to generate the local noise-reduced image block.
3. The method according to claim 1 or 2, characterized in that:
the local image patches are image patches of interest of the first image.
4. The method according to claim 1 or 2, wherein the local image block comprises a first image block and a second image block, the method comprising:
denoising the first image block by using a first denoising algorithm to generate a first denoised image block;
reducing noise of the second image block by using a second noise reduction algorithm to generate a second noise reduced image block, wherein: the resource occupation of the first noise reduction algorithm is higher than the resource occupation of the second noise reduction algorithm.
5. The method of claim 4, wherein:
the first image block is a first priority region of interest of the first image;
the second image block is a second priority non-region of interest of the first image.
6. The method according to claim 1 or 2, wherein extracting a local image block from the first image further comprises:
obtaining a region of interest of the second image through image detection, wherein the second image is the first image after format conversion;
and taking the position of the region of interest of the second image in the second image as the position of the region of interest of the first image in the first image, and extracting the local image blocks from the first image according to the region of interest.
7. The method of claim 6, wherein:
the second image is a three-channel image.
8. The method according to any one of claims 1 to 6, wherein:
the first image is a RAW format image.
9. The method according to claim 1 or 2, characterized in that:
the first noise reduction algorithm is a noise reduction algorithm of a deep neural network model;
the second noise reduction algorithm is a noise reduction algorithm of a deep neural network model.
10. The method according to claim 1 or 2, characterized in that:
the noise reduction effect of the first noise reduction algorithm is better than that of the second noise reduction algorithm.
11. An image processing apparatus characterized by comprising:
the image block acquisition module is used for extracting local image blocks from a first image, wherein the number of the local image blocks is one or more;
the noise reduction module is used for generating a local noise reduction image block by using a noise reduction algorithm on the local image block;
and the fusion module is used for fusing the local noise reduction image block and a target image to generate a fused image, wherein the target image is obtained according to the first image.
12. The image processing apparatus according to claim 11, wherein the target image includes one of:
(1) the first image;
(2) a first noise-reduced image generated by reducing noise of the first image using a noise reduction algorithm, wherein the resource occupation of the noise reduction algorithm used to generate the first noise-reduced image is lower than that of the noise reduction algorithm used to generate the local noise-reduced image block;
(3) a background image block, wherein the background image block comprises a part of the first image except the local image block, and the size of the background image block is smaller than that of the first image;
(4) a noise-reduced background image block generated by reducing noise of the background image block using a noise reduction algorithm, wherein the resource occupation of the noise reduction algorithm used to generate the noise-reduced background image block is lower than that of the noise reduction algorithm used to generate the local noise-reduced image block.
13. The image processing apparatus according to claim 11 or 12, characterized in that:
the local image block is an image block of interest of the first image.
14. The image processing apparatus according to claim 11 or 12, wherein the local image blocks include a first image block and a second image block, and the noise reduction module is configured to:
denoising the first image block by using a first denoising algorithm to generate a first denoised image block;
reducing noise of the second image block by using a second noise reduction algorithm to generate a second noise reduced image block, wherein: the resource occupation of the first noise reduction algorithm is higher than the resource occupation of the second noise reduction algorithm.
15. The image processing apparatus according to claim 14, characterized in that:
the first image block is a first priority region of interest of the first image;
the second image block is a second priority non-region of interest of the first image.
16. The image processing apparatus according to claim 11, wherein the image block acquisition module is specifically configured to:
obtaining a region of interest of the second image through image detection, wherein the second image is the first image after format conversion;
and taking the position of the region of interest of the second image in the second image as the position of the region of interest of the first image in the first image, and extracting the local image blocks from the first image according to the region of interest.
17. The image processing apparatus according to claim 16, characterized in that:
the second image is a three-channel image.
18. The image processing apparatus according to any one of claims 11-17, characterized in that:
the first image is a RAW format image.
19. The image processing apparatus according to claim 11, characterized in that:
the first noise reduction algorithm is a noise reduction algorithm of a deep neural network model;
the second noise reduction algorithm is a noise reduction algorithm of a deep neural network model.
20. The image processing apparatus according to claim 11, characterized in that:
the noise reduction effect of the first noise reduction algorithm is better than that of the second noise reduction algorithm.
21. A camera, comprising:
the lens is used for collecting light;
the sensor is used for generating a first image by performing photoelectric conversion on light collected by the lens;
a processor for performing the method of any one of claims 1-10.
CN202011582403.3A 2020-12-09 2020-12-28 Image processing method and device Pending CN114612313A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020114287074 2020-12-09
CN202011428707 2020-12-09

Publications (1)

Publication Number Publication Date
CN114612313A true CN114612313A (en) 2022-06-10

Family

ID=81856004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011582403.3A Pending CN114612313A (en) 2020-12-09 2020-12-28 Image processing method and device

Country Status (1)

Country Link
CN (1) CN114612313A (en)

Similar Documents

Publication Publication Date Title
CN107370958B (en) Image blurs processing method, device and camera terminal
CN110572573B (en) Focusing method and device, electronic equipment and computer readable storage medium
US20170006234A1 (en) Image display device and image display system
WO2021073140A1 (en) Monocular camera, and image processing system and image processing method
JP2022071177A (en) Multiplexed high dynamic range image
CN104134352A (en) Video vehicle characteristic detection system and detection method based on combination of long exposure and short exposure
US11587259B2 (en) Fixed pattern calibration for multi-view stitching
CN105809131A (en) Method and system for carrying out parking space waterlogging detection based on image processing technology
US11696039B2 (en) Smart IP camera with color night mode
CN110443766B (en) Image processing method and device, electronic equipment and readable storage medium
Wójcikowski et al. FPGA-based real-time implementation of detection algorithm for automatic traffic surveillance sensor network
CN113170048A (en) Image processing device and method
KR20230137316A (en) Image fusion for scenes with objects at multiple depths
CN117274107B (en) End-to-end color and detail enhancement method, device and equipment under low-illumination scene
CN112204566A (en) Image processing method and device based on machine vision
JP4821399B2 (en) Object identification device
CN114612313A (en) Image processing method and device
CN113949802A (en) Image processing method and camera
US20230419505A1 (en) Automatic exposure metering for regions of interest that tracks moving subjects using artificial intelligence
CN109309788A (en) More lens image splicing apparatus and method
WO2022078036A1 (en) Camera and control method therefor
US11574484B1 (en) High resolution infrared image generation using image data from an RGB-IR sensor and visible light interpolation
CN111383242B (en) Image fog penetration processing method and device
CN112270639A (en) Image processing method, image processing device and storage medium
CN112907454A (en) Method and device for acquiring image, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination