CN116721043A - License plate image processing method, device and equipment


Info

Publication number
CN116721043A
Authority
CN
China
Prior art keywords
image
pixel value
intermediate image
fusion
pixel
Prior art date
Legal status
Pending
Application number
CN202310300835.8A
Other languages
Chinese (zh)
Inventor
王祖力
高浩然
王程伟
陈烨
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202310300835.8A
Publication of CN116721043A


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application provides a license plate image processing method, device and equipment. The method comprises: acquiring a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image, where both original images contain the license plate of a moving vehicle; generating a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image and a noise level corresponding to the first intermediate image; performing frame difference fusion on the first intermediate image and the second intermediate image based on the displacement difference image to obtain a frame difference weight fusion image; and generating a target image to be output based on the frame difference weight fusion image. Because the image fusion is guided by the displacement difference image, the information in the bright and dark areas of each image can be fully utilized, and the fused image has a better visual effect.

Description

License plate image processing method, device and equipment
Technical Field
The present application relates to the field of image processing, and in particular, to a license plate image processing method, device and equipment.
Background
Under strong light sources (such as sunlight, lamps or reflections), an image contains both highlight areas and shadow/backlit areas: bright areas wash out to white because of overexposure, and dark areas sink to black because of underexposure, which seriously degrades image quality. The range between the brightest and darkest areas that a camera can reproduce in a single scene is limited; this range is the "dynamic range".
Because the dynamic range is limited, an image cannot render bright areas and dark areas well at the same time; for example, dark areas suffer from underexposure while bright areas suffer from overexposure. To address the problem that bright and dark areas cannot both be rendered well, multiple images of the same scene can be captured with different exposures and fused into a high-dynamic-range image (also called a wide-dynamic-range image), which preserves the colors, details and other information of both bright and dark areas. Compared with an ordinary image, a high-dynamic-range image offers a greater dynamic range and more image detail, giving the user a better visual experience.
Although a high-dynamic-range image can retain useful information such as the colors and details of bright and dark areas, it is obtained by fusing several images. If the fusion does not fully exploit the bright-area and dark-area information of each image, useful detail in overexposed and underexposed areas may still be lost, and the fused image may still have a poor visual effect.
Disclosure of Invention
The application provides a license plate image processing method, which comprises the following steps:
acquiring a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image, wherein the exposure of the first original image is larger than that of the second original image; wherein the first original image and the second original image each comprise a license plate of a moving vehicle;
generating a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and a noise level corresponding to the first intermediate image;
based on a displacement difference image between the first intermediate image and the second intermediate image, carrying out frame difference fusion on the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image;
and generating a target image to be output based on the frame difference weight fusion image.
The application provides a license plate image processing device, which comprises:
the acquisition module is used for acquiring a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image, wherein the exposure of the first original image is larger than that of the second original image; wherein the first original image and the second original image each comprise a license plate of a moving vehicle;
A processing module, configured to generate a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and a noise level corresponding to the first intermediate image; based on a displacement difference image between the first intermediate image and the second intermediate image, carrying out frame difference fusion on the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image; and generating a target image to be output based on the frame difference weight fusion image.
The present application provides an electronic device including: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine executable instructions to implement the license plate image processing method according to the above example of the present application.
As can be seen from the above technical solutions, in the embodiments of the present application, frame difference fusion is performed on the first intermediate image and the second intermediate image based on the displacement difference image between them, so as to obtain a frame difference weight fusion image. Because the image fusion is guided by the displacement difference image, the bright-area and dark-area information of each image, including the useful detail of overexposed and underexposed areas, can be fully utilized, and the fused image has a better visual effect. For example, the displacement difference of the license plate between the images is judged from the displacement difference image, and the fusion weights are determined from this displacement difference, which alleviates the problem of license plate ghosting in the fused image.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a license plate image processing method according to an embodiment of the present application;
FIG. 2 is a high dynamic range image based on luminance fusion in one embodiment of the application;
FIG. 3 is a flowchart of a license plate image processing method according to an embodiment of the present application;
FIG. 4 is a schematic illustration of luminance fusion of a first intermediate image and a second intermediate image;
FIG. 5 is a schematic diagram of frame difference fusion of a first intermediate image and a second intermediate image;
FIG. 6 is a schematic diagram of fusing a luminance weight fused image and a frame difference weight fused image;
fig. 7 is a schematic diagram of a license plate image processing apparatus in an embodiment of the present application;
fig. 8 is a hardware configuration diagram of an electronic device in an embodiment of the application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the application. Furthermore, depending on the context, the word "if" as used herein may be interpreted as "when", "upon" or "in response to determining".
The license plate image processing method provided by the embodiments of the present application can be applied to a front-end device (such as a network camera, an analog camera or another camera) or to a back-end device (such as a server, a management device or a storage device). When applied to a front-end device, the front-end device captures a first original image and a second original image of the same target scene and, based on these two images, processes the license plate images with the scheme of the embodiments of the present application. When applied to a back-end device, the front-end device captures the first original image and the second original image of the same target scene and sends them to the back-end device, which then processes the license plate images with the scheme of the embodiments of the present application based on the two images.
Referring to fig. 1, a flow chart of the license plate image processing method is shown, and the method may include:
step 101, a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image are obtained, where the exposure of the first original image may be greater than the exposure of the second original image. Wherein, the first original image and the second original image each comprise a license plate of the moving vehicle, and therefore, the first original image and the second original image may be referred to as license plate images, taking the first original image and the second original image as examples.
For example, the time domain noise reduction may be performed on the first original image to obtain a time domain noise reduced image, an exposure difference proportion is determined based on the exposure amount of the first original image and the exposure amount of the second original image, and the time domain noise reduced image is adjusted (such as image alignment adjustment) based on the exposure difference proportion to obtain an adjusted image; and generating a first intermediate image corresponding to the first original image based on the adjusted image.
For example, spatial denoising may be performed on the second original image to obtain a spatial denoised image, and a second intermediate image corresponding to the second original image is generated based on the spatial denoised image.
Step 102, generating a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image and a noise level corresponding to the first intermediate image.
Illustratively, generating the displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image and the noise level corresponding to the first intermediate image may include, but is not limited to: for each pixel point in the displacement difference image, determining the motion value of that pixel point based on the first pixel value of the corresponding pixel point in the first intermediate image, the second pixel value of the corresponding pixel point in the second intermediate image, and the noise level corresponding to the first pixel value.
Wherein the larger the motion value, the larger the displacement difference is indicated. The noise level may be obtained by querying a mapping relationship through the first pixel value, where the mapping relationship is used to represent a relationship between a pixel value and a noise level, and when the pixel value is larger, the noise level corresponding to the pixel value is smaller.
And 103, performing frame difference fusion on the first intermediate image and the second intermediate image based on the displacement difference image between the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image.
Illustratively, based on the displacement difference image between the first intermediate image and the second intermediate image, performing frame difference fusion on the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image, which may include, but is not limited to: for each pixel point in the frame difference weight fusion image, if the corresponding target motion value of the pixel point in the displacement difference image is smaller than a first displacement difference threshold value, determining the corresponding target pixel value of the pixel point in the frame difference weight fusion image based on the corresponding first pixel value of the pixel point in the first intermediate image; if the target motion value is greater than the second displacement difference threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in the second intermediate image; and if the target motion value is between the first displacement difference threshold value and the second displacement difference threshold value, carrying out weighting operation on the first pixel value and the second pixel value to obtain the target pixel value.
And 104, generating a target image to be output based on the frame difference weight fusion image.
In one possible implementation, after obtaining the frame difference weight fusion image, the frame difference weight fusion image may be taken as the target image. Or, based on the brightness information corresponding to the first intermediate image or the second intermediate image, the first intermediate image and the second intermediate image can be subjected to brightness fusion to obtain a brightness weight fusion image, the brightness weight fusion image and the frame difference weight fusion image are fused to obtain a fused image, and a target image to be output is generated based on the fused image. For example, the fused image may be used as the target image, or the fused image may be processed to obtain the target image.
For example, when the luminance weight fusion image and the frame difference weight fusion image are fused, a weighting operation may be performed on the two images to obtain the fused image; that is, the fused image is obtained from the luminance weight fusion image and its weight coefficient together with the frame difference weight fusion image and its weight coefficient, as illustrated by the sketch below. Alternatively, a temporally noise-reduced image corresponding to the first original image may be obtained, a motion region corresponding to the moving vehicle in the first original image may be obtained from that temporally noise-reduced image, and motion-region fusion may then be performed on the luminance weight fusion image and the frame difference weight fusion image based on that motion region to obtain the fused image. Of course, these are only two examples of fusion methods, and the fusion method is not limited to them.
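As an illustration of the first fusion option above, the following Python sketch (the function name and the example weight values are assumptions for illustration, not taken from this application) blends the luminance weight fusion image and the frame difference weight fusion image with per-image weight coefficients:

import numpy as np

def blend_fused_images(luma_fused, framediff_fused, w_luma=0.5, w_framediff=0.5):
    # Weighted combination of the luminance weight fusion image and the
    # frame difference weight fusion image; the two weight coefficients
    # here are example values and could be configured empirically.
    return (w_luma * luma_fused.astype(np.float32)
            + w_framediff * framediff_fused.astype(np.float32))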
Illustratively, based on the luminance information corresponding to the first intermediate image, luminance fusion is performed on the first intermediate image and the second intermediate image to obtain a luminance weight fusion image, which may include, but is not limited to: for each pixel point in the brightness weight fusion image, if the corresponding first pixel value of the pixel point in the first intermediate image is smaller than the third brightness threshold value A1, the corresponding target pixel value of the pixel point in the brightness weight fusion image can be determined based on the first pixel value. Alternatively, if the first pixel value is greater than the fourth luminance threshold B1, the target pixel value may be determined based on a corresponding second pixel value of the pixel point in the second intermediate image. Alternatively, if the first pixel value is located between the third luminance threshold value A1 and the fourth luminance threshold value B1, the first pixel value and the second pixel value may be weighted to obtain the target pixel value.
Illustratively, based on the brightness information corresponding to the second intermediate image, the brightness fusion is performed on the first intermediate image and the second intermediate image to obtain a brightness weight fusion image, which may include, but is not limited to: for each pixel point in the brightness weight fusion image, if the first pixel value corresponding to the pixel point in the second intermediate image is smaller than the third brightness threshold A2 (the third brightness threshold A2 is the same as or different from the third brightness threshold A1), the target pixel value corresponding to the pixel point in the brightness weight fusion image can be determined based on the first pixel value. Alternatively, if the first pixel value is greater than the fourth luminance threshold B2 (the fourth luminance threshold B2 is the same as or different from the fourth luminance threshold B1), the target pixel value may be determined based on the second pixel value corresponding to the pixel point in the second intermediate image. Alternatively, if the first pixel value is located between the third luminance threshold value A2 and the fourth luminance threshold value B2, the first pixel value and the second pixel value may be weighted to obtain the target pixel value.
Illustratively, based on a motion region corresponding to a moving vehicle in the first original image, performing motion region fusion on the brightness weight fusion image and the frame difference weight fusion image to obtain a fused image, namely a motion region fusion image. For example, for each pixel point in the fused image, if the pixel point is not located in the motion area, determining a target pixel value corresponding to the pixel point in the fused image based on a first pixel value corresponding to the pixel point in the luminance weight fused image. If the pixel point is positioned in the motion area and the corresponding reference pixel value of the pixel point in the first intermediate image is smaller than a first brightness threshold value, determining a target pixel value based on the first pixel value; if the reference pixel value is larger than the second brightness threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in the frame difference weight fusion image; if the reference pixel value is located between the first brightness threshold value and the second brightness threshold value, carrying out weighting operation on the first pixel value and the second pixel value to obtain a target pixel value; and if the moving speed of the moving vehicle in the moving area is smaller than the speed threshold, the weight coefficient of the first pixel value is larger than the weight coefficient of the second pixel value.
As can be seen from the above technical solutions, in the embodiments of the present application, luminance fusion may be performed on the first intermediate image and the second intermediate image based on the luminance information corresponding to the first intermediate image to obtain a luminance weight fusion image; frame difference fusion may be performed on the two intermediate images based on the displacement difference image between them to obtain a frame difference weight fusion image; and motion-region fusion may be performed on the luminance weight fusion image and the frame difference weight fusion image based on the motion region corresponding to the first original image to obtain a motion-region fusion image. Fusing with both the displacement difference image and the motion region makes full use of the bright-area and dark-area information of each image, including the useful detail of overexposed and underexposed areas, and gives the fused image a better visual effect. For example, the displacement difference of the license plate between the images is judged from the displacement difference image and the fusion weights are determined from this difference, which alleviates license plate ghosting. Likewise, fusing based on the motion region lets the license plate and the headlight halo come from the short-exposure image while the vehicle body and surroundings come from the long-exposure image, which solves the ghosting problem for fast-moving vehicles, reduces the headlight halo without sacrificing the brightness of the surroundings, and improves the dynamic range of the image.
The technical scheme of the embodiment of the application is described below with reference to specific application scenarios.
To address the problem that bright areas and dark areas cannot both be rendered well, multiple images of the same scene can be captured with different exposures and fused into a high-dynamic-range image (also called a wide-dynamic-range image), which preserves the colors, details and other information of both bright and dark areas. Compared with an ordinary image, a high-dynamic-range image offers a greater dynamic range and more image detail, giving the user a better visual experience. However, because the high-dynamic-range image is obtained by fusing several images, if the fusion does not fully exploit the bright-area and dark-area information of each image, useful detail in overexposed and underexposed areas may still be lost, and the fused image may still have a poor visual effect.
Referring to FIG. 2, which shows a high-dynamic-range image obtained by luminance-based fusion, this fusion mode leaves a displacement difference for fast-moving vehicles, such as the double outline above the license plate characters. The halo around the vehicle lights is also not well suppressed: the halo brightness is close to that of the surrounding environment, so during luminance fusion the halo and the surroundings are brightened or darkened together, and if the overall brightness is to be preserved the halo around the lights becomes larger; hence the halo is not well suppressed.
In view of these findings, the embodiments of the present application fuse the images by combining the displacement difference image and the motion region, so that the bright-area and dark-area information of each image, including the useful detail of overexposed and underexposed areas, is fully utilized and the fused image has a better visual effect. For example, the displacement difference of the license plate between the images is judged from the displacement difference image and the fusion weights are determined from this difference, which alleviates license plate ghosting. Fusing based on the motion region lets the license plate and the headlight halo come from the short-exposure image while the vehicle body and surroundings come from the long-exposure image, which solves the ghosting problem for fast-moving vehicles, reduces the headlight halo without sacrificing the brightness of the surroundings, and improves the dynamic range of the image.
The embodiment of the present application provides a license plate image processing method, which can be applied to a front-end device (such as a network camera, an analog camera or another camera) or to a back-end device (such as a server, a management device or a storage device). Referring to FIG. 3, which is a flow diagram of the license plate image processing method, the method comprises the following steps:
step 301, acquiring a first original image and a second original image. Wherein the exposure of the first original image may be greater than the exposure of the second original image. For example, the exposure time of the first original image may be longer than the exposure time of the second original image, such as the first original image being a long frame image and the second original image being a short frame image. For another example, the gain of the first original image may be greater than the gain of the second original image.
For example, the first original image and the second original image may each include a license plate of a moving vehicle, and thus, the first original image and the second original image may each be referred to as a license plate image.
Illustratively, the shutter and the gain are automatic exposure parameters. The shutter controls the exposure time: the longer the shutter, the longer the light-gathering time and the larger the exposure, so under otherwise identical conditions a longer shutter gives a brighter image. The gain controls how much the sensed pixel values are amplified; under otherwise identical conditions, a larger gain gives a brighter image. In summary, the shutter and/or the gain are controlled so that the exposure of the first original image is larger than that of the second original image, for example the exposure time (determined by the shutter) of the first original image is longer than that of the second original image, and/or the gain of the first original image is greater than that of the second original image. For convenience of description, the case where the exposure time of the first original image is longer than that of the second original image is taken as an example, i.e. the first original image is the long-frame image and the second original image is the short-frame image.
For example, when the sensor works in DOL mode or line-by-line mode, it can be controlled to perform long- and short-frame exposure to obtain a long-frame image and a short-frame image; the long-frame image is recorded as the first original image and the short-frame image as the second original image. Because the exposure times of the long-frame and short-frame images differ and their exposure order differs, there is a brightness difference and a displacement difference between the two images.
Step 302, performing time domain noise reduction on the first original image to obtain a time domain noise reduced image.
For example, since the first original image is the long-frame image and has a better signal-to-noise ratio, temporal noise reduction can be applied to it to obtain a temporally noise-reduced image. For instance, a 3DNR algorithm performs 3-D temporal noise reduction on the first original image, i.e. only temporal noise reduction and no spatial noise reduction, so the spatial detail of the first original image is not lost while the frame-to-frame noise is removed.
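A minimal sketch of this step, assuming a simple recursive (frame-averaging) temporal filter in place of the unspecified 3DNR algorithm; the function name and the blending factor are illustrative assumptions:

import numpy as np

def temporal_denoise(prev_output, current_long_frame, alpha=0.25):
    # Recursive temporal filter: blend the current long-frame image with the
    # previous filtered output. No spatial filtering is applied, so spatial
    # detail is preserved while frame-to-frame noise is attenuated.
    current = current_long_frame.astype(np.float32)
    if prev_output is None:          # first frame: nothing to average with yet
        return current
    return (1.0 - alpha) * prev_output + alpha * current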
Step 303, determining a motion area corresponding to the first original image based on the time domain noise-reduced image.
For example, after the temporally noise-reduced image is obtained, the area where the moving vehicle is located, i.e. the motion area corresponding to the first original image, can be determined from that image, and the coordinates of this area can be output. For example, if the area is rectangular, the 4 vertex coordinates of the rectangle are output, or one vertex coordinate (such as the top-left, top-right, bottom-left or bottom-right vertex) together with the width and height; if the area is circular, the centre coordinates and the radius of the circle are output (the two formats are sketched after the next paragraph).
When determining the area where the moving vehicle is located from the temporally noise-reduced image, the image may be analysed directly to obtain the area, or it may be fed into a neural network that outputs the area; the method of determining the area is not limited.
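For illustration only, the two output formats described above could be represented as follows (a hypothetical sketch; the class and field names are assumptions):

from dataclasses import dataclass

@dataclass
class RectMotionRegion:
    # Rectangular motion region: one vertex (here the top-left corner)
    # plus width/height; equivalent to listing the 4 vertex coordinates.
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.width and self.y <= py < self.y + self.height

@dataclass
class CircleMotionRegion:
    # Circular motion region: centre point coordinates plus radius.
    cx: float
    cy: float
    radius: float

    def contains(self, px: float, py: float) -> bool:
        return (px - self.cx) ** 2 + (py - self.cy) ** 2 <= self.radius ** 2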
Step 304, determining an exposure difference ratio based on the exposure amount of the first original image and the exposure amount of the second original image, adjusting the time domain noise-reduced image (such as image alignment adjustment) based on the exposure difference ratio, obtaining an adjusted image, and generating a first intermediate image corresponding to the first original image based on the adjusted image.
Illustratively, since the exposure of the first original image is greater than that of the second original image (for example, its exposure time is longer), the first original image is brighter than the second original image. In order to fuse the two images in the same dimension, the first original image needs to be aligned to the brightness dimension of the second original image before fusion.
In order to align the first original image and the second original image, the exposure difference ratio may be determined from the exposure amounts of the two images, for example as the quotient of the exposure amount of the first original image and that of the second original image; taking exposure time as the measure, the ratio is the quotient of the exposure time of the first original image and that of the second original image.
After the exposure difference ratio is obtained, the time-domain noise-reduced image may be adjusted based on the exposure difference ratio, so that the adjusted image is aligned to the dimension of the second original image, that is, the dimension of the adjusted image is the same as the dimension of the second original image, and the alignment may be that the time-domain noise-reduced image is divided by the exposure difference ratio. For example, the pixel value of each pixel in the time domain noise reduced image is divided by the exposure difference ratio.
After the time-domain noise-reduced image is adjusted to obtain an adjusted image, a first intermediate image corresponding to the first original image may be generated based on the adjusted image, for example, the adjusted image may be used as the first intermediate image, or the adjusted image may be processed to obtain the first intermediate image, which is not limited.
In summary, based on steps 302-304, a first intermediate image corresponding to the first original image may be obtained.
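A minimal sketch of the alignment in steps 302-304, assuming exposure time is used as the exposure measure; the function and parameter names are illustrative:

import numpy as np

def align_long_frame(temporal_denoised_long, exposure_time_long, exposure_time_short):
    # Exposure difference ratio = long-frame exposure time / short-frame exposure time.
    # Dividing every pixel of the temporally noise-reduced long-frame image by this
    # ratio aligns it to the brightness dimension of the short-frame image, giving
    # the first intermediate image (if no further processing is applied).
    ratio = exposure_time_long / exposure_time_short
    return temporal_denoised_long.astype(np.float32) / ratio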
And 305, performing spatial domain noise reduction on the second original image to obtain a spatial domain noise-reduced image, and generating a second intermediate image corresponding to the second original image based on the spatial domain noise-reduced image.
For example, since the second original image is the short-frame image, its exposure is small (e.g. its exposure time is short), so it is dark and has a poor signal-to-noise ratio, and noise reduction is therefore needed. Moreover, because the short-frame image is dark, temporal noise reduction is prone to motion-detection errors, which in turn cause loss of detail; therefore spatial noise reduction is applied to the second original image (i.e. the short-frame image) to obtain a spatially noise-reduced image. For example, a 2DNR algorithm performs 2-D spatial noise reduction on the second original image, which removes some large-grain noise while preserving finer noise and detail.
After the spatial domain noise-reduced image is obtained, a second intermediate image corresponding to the second original image may be generated based on the spatial domain noise-reduced image, for example, the spatial domain noise-reduced image may be used as the second intermediate image, or the spatial domain noise-reduced image may be processed to obtain the second intermediate image, which is not limited.
In summary, based on step 305, a second intermediate image corresponding to the second original image may be obtained.
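A minimal sketch of step 305, assuming OpenCV is available and using an edge-preserving bilateral filter as the 2-D spatial noise reduction; the application does not mandate this particular filter, so both the filter choice and its parameters are assumptions:

import cv2
import numpy as np

def spatial_denoise_short(short_frame):
    # 2-D spatial noise reduction of the short-frame image: remove larger-grain
    # noise while keeping edges and fine detail. The filtered result can serve as
    # the second intermediate image (if no further processing is applied).
    img = short_frame.astype(np.float32)
    return cv2.bilateralFilter(img, d=5, sigmaColor=25, sigmaSpace=5)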
And 306, carrying out brightness fusion on the first intermediate image and the second intermediate image based on brightness information corresponding to the first intermediate image or the second intermediate image to obtain a brightness weight fusion image.
In one possible implementation manner, the first intermediate image and the second intermediate image may be subjected to luminance fusion based on luminance information corresponding to the first intermediate image, so as to obtain a luminance weight fusion image.
For each pixel in the luminance weight fusion image, taking the pixel (x, y) as an example, the pixel value corresponding to the pixel (x, y) in the first intermediate image is recorded as a first pixel value, the pixel value corresponding to the pixel (x, y) in the second intermediate image is recorded as a second pixel value, the pixel value corresponding to the pixel (x, y) in the luminance weight fusion image is recorded as a target pixel value, and the target pixel value can be determined based on the first pixel value and/or the second pixel value. After the target pixel value corresponding to each pixel point is obtained, the target pixel values corresponding to all the pixel points can be combined to obtain the brightness weight fusion image.
To determine the target pixel value of the pixel point (x, y) in the luminance weight fusion image: if the first pixel value is smaller than the third luminance threshold (which may be configured empirically, e.g. th1), the target pixel value of the pixel point (x, y) is determined based on the first pixel value, for example the first pixel value is used as the target pixel value. Alternatively, if the first pixel value is greater than the fourth luminance threshold (which may be configured empirically, e.g. th2, where the fourth luminance threshold is greater than the third luminance threshold), the target pixel value is determined based on the second pixel value, for example the second pixel value is used as the target pixel value. Alternatively, if the first pixel value lies between the third and fourth luminance thresholds (i.e. it is not smaller than the third and not larger than the fourth), a weighting operation is performed on the first pixel value and the second pixel value to obtain the target pixel value of the pixel point (x, y).
For example, referring to FIG. 4, which is a schematic diagram of luminance fusion of the first intermediate image and the second intermediate image, the abscissa Yin represents the first pixel value of the pixel point (x, y) in the first intermediate image, i.e. the first pixel value serves as the reference for deciding the target pixel value of the pixel point (x, y), and the ordinate Yout represents the target pixel value of the pixel point (x, y) in the luminance weight fusion image.
Taking a pixel (x, y) as an example for each pixel in the luminance weight fusion image, if a first pixel value Yin corresponding to the pixel (x, y) is smaller than a third luminance threshold th1, a target pixel value Yout corresponding to the pixel (x, y) is ilong, and ilong is a pixel value (first pixel value) corresponding to the pixel (x, y) in the first intermediate image. If the first pixel value Yin corresponding to the pixel point (x, y) is greater than the fourth brightness threshold th2, the target pixel value Yout corresponding to the pixel point (x, y) is Yshort, and Yshort is the pixel value (second pixel value) corresponding to the pixel point (x, y) in the second intermediate image. If the first pixel value Yin corresponding to the pixel point (x, y) is located between the third brightness threshold th1 and the fourth brightness threshold th2, the target pixel value Yout corresponding to the pixel point (x, y) is ybend, and ybend is the weighted pixel value.
By way of example, ybend may be calculated using the following formula: ybend = w1×a + w2×b. Where a represents the first pixel value corresponding to the pixel point (x, y) in the first intermediate image, b represents the second pixel value corresponding to the pixel point (x, y) in the second intermediate image, ybend represents the target pixel value corresponding to the pixel point (x, y) in the luminance weight fusion image, w1 represents the weight coefficient of the first pixel value, w2 represents the weight coefficient of the second pixel value, and w1 and w2 may be empirically configured without limitation. For example, the sum of w1 and w2 may be 1, w1 may be greater than w2, w1 may be equal to w2, and w1 may be less than w2.
For example, if the first pixel value corresponding to the pixel point (x, y) is smaller than (th1+th2)/2, w1 may be greater than w2, and the closer the first pixel value corresponding to the pixel point (x, y) is to th1, the greater w1 is. If the first pixel value corresponding to the pixel point (x, y) is equal to (th1+th2)/2, w1 may be equal to w2. If the first pixel value corresponding to the pixel point (x, y) is greater than (th1+th2)/2, w1 may be smaller than w2, and the closer the first pixel value corresponding to the pixel point (x, y) is to th2, the greater w2 is.
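The per-pixel rule of FIG. 4 can be sketched in Python as follows (a vectorised sketch; the linear ramp used for the transition-zone weights is one possible choice consistent with the description above, and the function name is an assumption):

import numpy as np

def luminance_fuse(first_intermediate, second_intermediate, th1, th2):
    # Yin < th1            -> take the first (long-frame) intermediate pixel value
    # Yin > th2            -> take the second (short-frame) intermediate pixel value
    # th1 <= Yin <= th2    -> weighted blend; w2 ramps linearly from 0 at th1 to 1
    #                         at th2, so w1 = w2 = 0.5 at (th1 + th2) / 2.
    a = first_intermediate.astype(np.float32)   # Yin, used as the reference
    b = second_intermediate.astype(np.float32)
    w2 = np.clip((a - th1) / float(th2 - th1), 0.0, 1.0)
    w1 = 1.0 - w2
    blended = w1 * a + w2 * b                    # ybend = w1*a + w2*b
    return np.where(a < th1, a, np.where(a > th2, b, blended))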
In another possible implementation manner, the luminance fusion may be performed on the first intermediate image and the second intermediate image based on the luminance information corresponding to the second intermediate image, so as to obtain a luminance weight fusion image, where the luminance fusion manner is similar to the luminance fusion manner based on the first intermediate image, and the difference is that: the corresponding pixel value of the pixel point (x, y) in the second intermediate image is noted as the first pixel value, and the other processes are similar.
Step 307, based on the displacement difference image between the first intermediate image and the second intermediate image, performing frame difference fusion on the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image.
For example, since the first original image and the second original image have a displacement difference, the first intermediate image and the second intermediate image have a displacement difference, so that in order to make the displaced area come from the same frame data, the long and short frames can be subjected to frame difference fusion, that is, based on the displacement difference image between the first intermediate image and the second intermediate image, the first intermediate image and the second intermediate image are subjected to frame difference fusion, so as to obtain a frame difference weight fusion image.
In one possible implementation, the frame difference fusion may be performed using the following steps:
step 3071, generating a displacement difference image based on the first intermediate image, the second intermediate image, and the noise level corresponding to the first intermediate image, wherein the displacement difference image includes a motion value (motion value) of each pixel point.
For each pixel in the displacement difference image, taking the pixel (x, y) as an example, the pixel value corresponding to the pixel (x, y) in the first intermediate image is recorded as a first pixel value, the pixel value corresponding to the pixel (x, y) in the second intermediate image is recorded as a second pixel value, the pixel value corresponding to the pixel (x, y) in the displacement difference image is recorded as a motion value, and the motion value corresponding to the pixel (x, y) can be determined based on the first pixel value, the second pixel value and the noise level corresponding to the first pixel value. After the motion value corresponding to each pixel point is obtained, the motion values corresponding to all the pixel points are combined to obtain a displacement difference image.
For example, for a pixel point (x, y) in the displacement difference image, the motion value of that pixel point is determined based on the first pixel value of the corresponding pixel point in the first intermediate image, the second pixel value of the corresponding pixel point in the second intermediate image, and the noise level corresponding to the first pixel value. The larger the motion value, the larger the displacement difference, i.e. the more motion there is.
For example, the motion value of the pixel point (x, y) in the displacement difference image may be determined with the following formula: motion = |a - b| - noiselevel. In this formula, motion denotes the motion value of the pixel point (x, y) in the displacement difference image; the larger the motion value, the larger the displacement difference, and when fusion is performed according to the displacement difference, the larger the motion value, the more the short frame is favoured. a denotes the first pixel value of the pixel point (x, y) in the first intermediate image, and b denotes the second pixel value of the pixel point (x, y) in the second intermediate image.
noiselevel denotes the noise level corresponding to the first pixel value. In one possible implementation, the noise level may be preconfigured and all pixel values correspond to the same noise level, so that the noise level corresponding to the first pixel value may be obtained. In another possible embodiment, a mapping relationship may be configured in advance, where the mapping relationship is used to represent a relationship between a pixel value and a noise level, and when the pixel value is larger, the noise level corresponding to the pixel value is smaller, and of course, the foregoing is merely an example of the mapping relationship, and the mapping relationship is not limited as long as the relationship between the pixel value and the noise level can be reflected. Based on the above, the mapping relation can be queried through the first pixel value, so as to obtain the noise level corresponding to the first pixel value.
As can be seen from the above, in this embodiment the influence of noise has to be taken into account when the frame difference is evaluated, so a noise estimation variable noiselevel is introduced: only when the long/short frame difference (i.e. the absolute value of the difference between a and b) exceeds noiselevel is a real motion displacement considered to exist between the long and short frames. The noise estimation variable noiselevel is not a fixed value; it changes with the first pixel value and takes different values at different brightness levels, so the displacement difference can be evaluated better under different brightness conditions.
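A minimal sketch of step 3071, assuming the noise level is looked up from a (pixel value -> noise level) table by linear interpolation; the sample table values are purely illustrative assumptions:

import numpy as np

# Assumed mapping: the noise level decreases as the pixel value increases.
PIXEL_LEVELS = np.array([0.0, 64.0, 128.0, 192.0, 255.0])
NOISE_LEVELS = np.array([20.0, 12.0, 8.0, 5.0, 3.0])

def displacement_difference(first_intermediate, second_intermediate):
    # motion = |a - b| - noiselevel, per pixel. Larger motion values indicate a
    # larger displacement difference; values at or below zero mean the long/short
    # frame difference is within the estimated noise, i.e. no real displacement.
    a = first_intermediate.astype(np.float32)
    b = second_intermediate.astype(np.float32)
    noiselevel = np.interp(a, PIXEL_LEVELS, NOISE_LEVELS)  # lookup by first pixel value
    return np.abs(a - b) - noiselevel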
Step 3072, after obtaining the displacement difference image, performing frame difference fusion on the first intermediate image and the second intermediate image based on the displacement difference image to obtain a frame difference weight fusion image, namely, an image after frame difference fusion.
For each pixel in the frame difference weight fusion image, the pixel value corresponding to the pixel (x, y) in the first intermediate image is recorded as a first pixel value, the pixel value corresponding to the pixel (x, y) in the second intermediate image is recorded as a second pixel value, the pixel value corresponding to the pixel (x, y) in the frame difference weight fusion image is recorded as a target pixel value, the pixel value corresponding to the pixel (x, y) in the displacement difference image is recorded as a target motion value, and the target pixel value is determined based on the first pixel value, the second pixel value and the target motion value. After the target pixel value corresponding to each pixel point is obtained, the target pixel values corresponding to all the pixel points can be combined to obtain the frame difference weight fusion image.
Illustratively, to determine the corresponding target pixel value of the pixel point (x, y) in the frame difference weight fusion image, then: if the target motion value corresponding to the pixel point (x, y) in the displacement difference image is smaller than the first displacement difference threshold (which may be configured empirically, such as th 1), determining the target pixel value corresponding to the pixel point (x, y) based on the first pixel value, for example, using the first pixel value as the target pixel value corresponding to the pixel point (x, y). Or if the target motion value corresponding to the pixel point (x, y) in the displacement difference image is greater than the second displacement difference threshold (which can be configured empirically, such as th2, and the second displacement difference threshold is greater than the first displacement difference threshold), determining the target pixel value corresponding to the pixel point (x, y) based on the second pixel value, for example, using the second pixel value as the target pixel value corresponding to the pixel point (x, y). Or if the target motion value of the pixel point (x, y) in the displacement difference image is between the first displacement difference threshold and the second displacement difference threshold (if the target motion value is not less than the first displacement difference threshold and not greater than the second displacement difference threshold), performing a weighting operation on the first pixel value and the second pixel value to obtain a target pixel value corresponding to the pixel point (x, y).
For example, referring to FIG. 5, which is a schematic diagram of frame difference fusion of the first intermediate image and the second intermediate image, the abscissa motion represents the target motion value of the pixel point (x, y) in the displacement difference image, i.e. the target motion value serves as the reference for deciding the target pixel value of the pixel point (x, y), and the ordinate Yout represents the target pixel value of the pixel point (x, y) in the frame difference weight fusion image.
Taking a pixel point (x, y) as an example for each pixel point in the frame difference weight fusion image, if a target motion value motion corresponding to the pixel point (x, y) is smaller than a first displacement difference threshold th1, a target pixel value Yout corresponding to the pixel point (x, y) is ilong, and ilong is a first pixel value corresponding to the pixel point (x, y) in the first intermediate image. If the motion of the target motion value corresponding to the pixel point (x, y) is greater than the second displacement difference threshold th2, the target pixel value Yout corresponding to the pixel point (x, y) is Yshort, and Yshort is the second pixel value corresponding to the pixel point (x, y) in the second intermediate image. If the motion of the target motion value corresponding to the pixel point (x, y) is located between the first displacement difference threshold th1 and the second displacement difference threshold th2, the target pixel value Yout corresponding to the pixel point (x, y) is ybend, and ybend is the weighted pixel value.
By way of example, ybend may be calculated using the following formula: ybend = w1×a + w2×b. Where a represents the first pixel value corresponding to the pixel point (x, y) in the first intermediate image, b represents the second pixel value corresponding to the pixel point (x, y) in the second intermediate image, ybend represents the target pixel value corresponding to the pixel point (x, y) in the frame difference weight fusion image, w1 represents the weight coefficient of the first pixel value, w2 represents the weight coefficient of the second pixel value, and w1 and w2 may be empirically configured, which is not limited. For example, the sum of w1 and w2 may be 1, w1 may be greater than w2, w1 may be equal to w2, and w1 may be less than w2.
For example, if the target motion value corresponding to the pixel (x, y) is smaller than (th1+th2)/2, w1 may be larger than w2, and the closer the target motion value corresponding to the pixel (x, y) is to th1, the larger w1 may be. If the target motion value corresponding to the pixel (x, y) is equal to (th1+th2)/2, w1 may be equal to w2. If the target motion value corresponding to the pixel (x, y) is greater than (th1+th2)/2, w1 may be smaller than w2, and the closer the target motion value corresponding to the pixel (x, y) is to th2, the greater w2 may be.
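The per-pixel rule of FIG. 5 can be sketched similarly (the linear ramp for the transition-zone weights is one possible choice consistent with the description above; the function and threshold names are assumptions):

import numpy as np

def frame_difference_fuse(first_intermediate, second_intermediate, motion_img, th1, th2):
    # motion < th1           -> take the first (long-frame) intermediate pixel value
    # motion > th2           -> take the second (short-frame) intermediate pixel value
    # th1 <= motion <= th2   -> weighted blend ybend = w1*a + w2*b, with w2 growing
    #                           as the motion value approaches th2.
    a = first_intermediate.astype(np.float32)
    b = second_intermediate.astype(np.float32)
    w2 = np.clip((motion_img - th1) / float(th2 - th1), 0.0, 1.0)
    w1 = 1.0 - w2
    blended = w1 * a + w2 * b
    return np.where(motion_img < th1, a, np.where(motion_img > th2, b, blended))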
Step 308, based on the motion region corresponding to the first original image, performing motion region fusion on the brightness weight fusion image and the frame difference weight fusion image to obtain a motion region fusion image, namely a fused image.
After luminance fusion of the first intermediate image and the second intermediate image yields the luminance weight fusion image and frame difference fusion of the two intermediate images yields the frame difference weight fusion image, the dark regions of the luminance weight fusion image have a good signal-to-noise ratio but its motion area suffers from fusion ghosting, while the frame difference weight fusion image has no motion ghosting but a poor signal-to-noise ratio in dark regions. The two images can therefore be fused so that the fused image has a good signal-to-noise ratio in dark regions and no fusion ghosting in the motion area.
For each pixel in the motion region fusion image, the pixel value corresponding to the pixel (x, y) in the luminance weight fusion image is recorded as a first pixel value, the pixel value corresponding to the pixel (x, y) in the frame difference weight fusion image is recorded as a second pixel value, the pixel value corresponding to the pixel (x, y) in the motion region fusion image is recorded as a target pixel value, the pixel value corresponding to the pixel (x, y) in the first intermediate image is recorded as a reference pixel value, and the target pixel value can be determined based on the reference pixel value, the first pixel value and the second pixel value. And after obtaining the target pixel value corresponding to each pixel point, combining the target pixel values corresponding to all the pixel points to obtain the motion region fusion image.
Illustratively, to determine the corresponding target pixel value of the pixel point (x, y) in the motion region fusion image, then: it may be determined whether the pixel (x, y) is located in the motion area, for example, since the motion area corresponding to the first original image, such as the 4 vertex coordinates of the rectangular area, etc., has been obtained in step 303, it may be determined whether the pixel (x, y) is located in the motion area. If the pixel (x, y) is not located in the motion region, a target pixel value corresponding to the pixel (x, y) is determined based on the first pixel value, for example, the first pixel value may be used as the target pixel value corresponding to the pixel (x, y).
If the pixel point (x, y) is located in the motion region, then: if the reference pixel value corresponding to the pixel point (x, y) in the first intermediate image is smaller than a first brightness threshold (which may be configured empirically, e.g., th3), the target pixel value corresponding to the pixel point (x, y) may be determined based on the first pixel value; for example, the first pixel value is taken as the target pixel value. Alternatively, if the reference pixel value corresponding to the pixel point (x, y) in the first intermediate image is greater than a second brightness threshold (which may be configured empirically, e.g., th4, where the second brightness threshold is greater than the first brightness threshold), the target pixel value may be determined based on the second pixel value; for example, the second pixel value is taken as the target pixel value. Or, if the reference pixel value corresponding to the pixel point (x, y) in the first intermediate image lies between the first brightness threshold and the second brightness threshold (i.e., it is not smaller than the first brightness threshold and not larger than the second brightness threshold), the first pixel value and the second pixel value may be weighted to obtain the target pixel value corresponding to the pixel point (x, y). When weighting the first pixel value and the second pixel value, if the moving speed of the moving vehicle in the motion region is greater than a speed threshold, the weight coefficient of the second pixel value may be greater than the weight coefficient of the first pixel value; if the moving speed is smaller than the speed threshold, the weight coefficient of the first pixel value may be greater than the weight coefficient of the second pixel value.
For example, referring to fig. 6, which is a schematic diagram of fusing the brightness weight fusion image and the frame difference weight fusion image, the abscissa Y represents the reference pixel value corresponding to the pixel point (x, y) in the first intermediate image (i.e., the reference pixel value serves as the basis for deciding how the target pixel value is determined), and the ordinate Yout represents the target pixel value corresponding to the pixel point (x, y) in the motion region fusion image.
For each pixel point in the motion region fusion image, taking the pixel point (x, y) as an example: if the reference pixel value Y corresponding to the pixel point (x, y) is smaller than the first brightness threshold th3, the target pixel value Yout corresponding to the pixel point (x, y) is the first pixel value corresponding to the pixel point (x, y) in the brightness weight fusion image. If the reference pixel value Y corresponding to the pixel point (x, y) is greater than the second brightness threshold th4, the target pixel value Yout corresponding to the pixel point (x, y) is the second pixel value corresponding to the pixel point (x, y) in the frame difference weight fusion image. If the reference pixel value Y corresponding to the pixel point (x, y) lies between the first brightness threshold th3 and the second brightness threshold th4, the target pixel value Yout corresponding to the pixel point (x, y) is Yblend, where Yblend may be a weighted pixel value.
By way of example, Yblend may be calculated using the following formula: Yblend = w1×a + w2×b. Here, a represents the first pixel value corresponding to the pixel point (x, y) in the brightness weight fusion image, b represents the second pixel value corresponding to the pixel point (x, y) in the frame difference weight fusion image, Yblend represents the target pixel value corresponding to the pixel point (x, y) in the motion region fusion image, w1 represents the weight coefficient of the first pixel value, and w2 represents the weight coefficient of the second pixel value. The values of w1 and w2 may be configured empirically and are not limited here. For example, the sum of w1 and w2 may be 1, and w1 may be greater than, equal to, or less than w2.
For example, when the first pixel value and the second pixel value are weighted, the fusion weights may be determined according to the fusion ghosting behaviour. If the moving speed of the moving vehicle in the motion region is greater than the speed threshold (i.e., the vehicle speed is fast), the weight coefficient w2 of the second pixel value may be greater than the weight coefficient w1 of the first pixel value, so that the weighting leans more toward the pixel value of the frame difference weight fusion image. If the moving speed of the moving vehicle in the motion region is smaller than the speed threshold (i.e., the vehicle speed is slow), the weight coefficient w1 of the first pixel value may be greater than the weight coefficient w2 of the second pixel value, so that the weighting leans more toward the pixel value of the brightness weight fusion image. If the moving speed of the moving vehicle in the motion region is equal to the speed threshold (i.e., the vehicle speed is moderate), the weight coefficient w1 of the first pixel value may be equal to the weight coefficient w2 of the second pixel value.
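Purely as an illustrative sketch in Python/NumPy (with hypothetical names), the piecewise selection of Yout described above and in fig. 6 can be written as follows; the boolean motion_mask input and the fixed w1/w2 defaults are assumptions, and in practice w1 and w2 would be chosen from the moving speed of the vehicle as described above.

import numpy as np

def motion_region_fuse(y_lum, y_motion, y_ref, motion_mask, th3, th4, w1=0.5, w2=0.5):
    # y_lum: brightness weight fusion image (first pixel values).
    # y_motion: frame difference weight fusion image (second pixel values).
    # y_ref: reference pixel values taken from the first intermediate image.
    # motion_mask: True where the pixel point lies inside the motion region.
    # th3/th4: first and second brightness thresholds, th3 < th4.
    y_blend = w1 * y_lum + w2 * y_motion            # Yblend = w1*a + w2*b for the mid band
    inside = np.where(y_ref < th3, y_lum,
                      np.where(y_ref > th4, y_motion, y_blend))
    # Pixel points outside the motion region take the brightness weight fusion value directly.
    return np.where(motion_mask, inside, y_lum)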
Step 309, generating a target image to be output based on the motion region fusion image.
In one possible implementation, the motion region fusion image may be used directly as the target image. Alternatively, since the images were aligned to the same brightness scale before the fusion operations (see the alignment operation in step 304), after the motion region fusion image is obtained, a dynamic range lifting operation may further be performed on it to obtain the target image; for example, a local brightness lifting approach is adopted to lift the brightness of the motion region fusion image to the brightness corresponding to the first original image, thereby obtaining the target image. This process is not limited here. The resulting target image is a high dynamic range image and is the final output image.
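The dynamic range lifting step is deliberately left open in the text above; the fragment below is only a stand-in sketch that applies a global gain equal to an assumed exposure difference ratio with a soft roll-off near saturation, whereas the text describes a local brightness lifting approach whose details are not specified. All names are hypothetical.

import numpy as np

def lift_dynamic_range(fused, exposure_ratio, knee=0.8):
    # fused: motion region fusion image normalised to [0, 1].
    # exposure_ratio: assumed exposure difference ratio (long exposure / short exposure).
    # knee: hypothetical point above which highlights are softly compressed.
    lifted = fused * exposure_ratio
    over = lifted > knee
    # Compress values above the knee so that highlights are not hard-clipped.
    lifted[over] = knee + (1.0 - knee) * (1.0 - np.exp(-(lifted[over] - knee) / (1.0 - knee)))
    return np.clip(lifted, 0.0, 1.0)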
According to the above technical solution, in the embodiment of the present application, fusion is performed by combining the displacement difference image and the motion region, so that the information of the bright and dark regions of each image can be fully utilized, the useful detail information of both the overexposed and underexposed regions is exploited, and the visual effect of the fused image is better. For example, the displacement difference of the license plate between different images is judged based on the displacement difference image, and the fusion weight is determined from this displacement difference, which solves the problem of fusion ghosting of the license plate. For example, fusion is performed based on the motion region, so that the license plate and the vehicle-light halo come from the short-exposure image while the vehicle body and the surrounding environment come from the long-exposure image; this solves the ghosting problem of fast-moving vehicles, reduces the vehicle-light halo without sacrificing the brightness of the surrounding environment, and improves the dynamic range of the image. Performing 3D noise reduction on the first original image can effectively remove noise and improve the signal-to-noise ratio after wide dynamic fusion, and the detected motion region is reused in the subsequent steps, which solves the ghosting problem of wide dynamic fusion for fast-moving vehicles; fusing based on the real motion region reduces the vehicle-light halo without sacrificing the surrounding brightness and improves the dynamic range of the image. The frame difference fusion of the long and short frames, i.e., subtracting the short-frame value from the long frame to represent the frame difference, is used to judge the displacement difference of the license plate between the fused long and short frames, and the fusion weight is determined according to this displacement difference, thereby solving the problem of license plate fusion ghosting.
Based on the same application concept as the above method, an embodiment of the present application provides a license plate image processing device, as shown in fig. 7, which is a schematic structural diagram of the license plate image processing device, where the device may include:
an obtaining module 71, configured to obtain a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image, where an exposure of the first original image is greater than an exposure of the second original image; wherein the first original image and the second original image each comprise a license plate of a moving vehicle;
a processing module 72, configured to generate a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and a noise level corresponding to the first intermediate image; based on a displacement difference image between the first intermediate image and the second intermediate image, carrying out frame difference fusion on the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image; and generating a target image to be output based on the frame difference weight fusion image.
Illustratively, the processing module 72 is specifically configured to, when generating the displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and the noise level corresponding to the first intermediate image: for each pixel point in the displacement difference image, determine a motion value corresponding to the pixel point based on a first pixel value corresponding to the pixel point in the first intermediate image, a second pixel value corresponding to the pixel point in the second intermediate image, and a noise level corresponding to the first pixel value; wherein the larger the motion value, the larger the displacement difference; and the noise level is obtained by querying a mapping relationship using the first pixel value, the mapping relationship representing a relationship between pixel values and noise levels.
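For illustration, one plausible per-pixel formulation consistent with this description (the text does not fix the exact formula) is the absolute frame difference with the looked-up noise level subtracted. The sketch below, with hypothetical names, assumes integer pixel values that index a one-dimensional noise look-up table.

import numpy as np

def displacement_difference(first, second, noise_lut):
    # first/second: first and second intermediate images with integer pixel values.
    # noise_lut: 1-D array mapping a pixel value to its expected noise level.
    noise = noise_lut[first.astype(np.int64)]                 # noise level per pixel, via look-up
    diff = np.abs(first.astype(np.float64) - second.astype(np.float64))
    return np.maximum(diff - noise, 0.0)                      # larger value => larger displacement difference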
Illustratively, the processing module 72 performs frame difference fusion on the first intermediate image and the second intermediate image based on the displacement difference image between the first intermediate image and the second intermediate image, and is specifically configured to: for each pixel point in the frame difference weight fusion image, if the target motion value corresponding to the pixel point in the displacement difference image is smaller than a first displacement difference threshold value, determining the target pixel value corresponding to the pixel point in the frame difference weight fusion image based on the first pixel value corresponding to the pixel point in the first intermediate image; if the target motion value is greater than a second displacement difference threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in the second intermediate image; and if the target motion value is between the first displacement difference threshold value and the second displacement difference threshold value, carrying out weighted operation on the first pixel value and the second pixel value to obtain a target pixel value.
The acquiring module 71 is specifically configured to, when acquiring a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image: performing time domain noise reduction on the first original image to obtain a time domain noise reduced image; determining an exposure difference ratio based on the exposure of the first original image and the exposure of the second original image, and adjusting the time domain noise-reduced image based on the exposure difference ratio to obtain an adjusted image; generating a first intermediate image corresponding to the first original image based on the adjusted image; performing spatial domain noise reduction on the second original image to obtain a spatial domain noise-reduced image; and generating a second intermediate image corresponding to the second original image based on the spatial domain noise-reduced image.
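As a rough, non-limiting sketch of this acquisition step: the 50/50 temporal blend and the Gaussian filter below are placeholder denoisers (the text does not name specific algorithms), and dividing the long-exposure result by the exposure difference ratio (assumed to be exp_first / exp_second) is an assumed way of bringing both images onto the same brightness scale before fusion.

import numpy as np
import cv2  # OpenCV, used here only as a generic spatial denoising stand-in

def build_intermediate_images(first_raw, second_raw, exp_first, exp_second, prev_frame):
    # Temporal (3D) noise reduction stand-in: blend the long-exposure frame with the previous frame.
    temporal = 0.5 * first_raw.astype(np.float64) + 0.5 * prev_frame.astype(np.float64)
    ratio = exp_first / exp_second                       # exposure difference ratio
    first_intermediate = temporal / ratio                # align to the short-exposure brightness scale
    # Spatial (2D) noise reduction stand-in for the short-exposure frame.
    second_intermediate = cv2.GaussianBlur(second_raw.astype(np.float64), (3, 3), 0)
    return first_intermediate, second_intermediate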
Illustratively, the processing module 72 is specifically configured to, when generating the target image to be output based on the frame difference weight fusion image: based on brightness information corresponding to the first intermediate image or the second intermediate image, carrying out brightness fusion on the first intermediate image and the second intermediate image to obtain a brightness weight fusion image; and fusing the brightness weight fused image and the frame difference weight fused image to obtain a fused image, and generating the target image to be output based on the fused image.
Illustratively, the processing module 72 fuses the luminance weight fused image and the frame difference weight fused image, and is specifically configured to: acquiring a time domain noise-reduced image corresponding to a first original image, and acquiring a motion area corresponding to a motion vehicle in the first original image based on the time domain noise-reduced image; and based on a motion region corresponding to the motion vehicle in the first original image, performing motion region fusion on the brightness weight fusion image and the frame difference weight fusion image to obtain a fused image.
For example, the processing module 72 performs motion region fusion on the luminance weight fusion image and the frame difference weight fusion image based on the motion region corresponding to the moving vehicle in the first original image, so as to obtain a fused image specifically for: for each pixel point in the fused image, if the pixel point is not located in the motion area, determining a target pixel value corresponding to the pixel point in the fused image based on a first pixel value corresponding to the pixel point in the brightness weight fused image; if the pixel point is located in the motion area and the reference pixel value corresponding to the pixel point in the first intermediate image is smaller than a first brightness threshold value, determining a target pixel value based on the first pixel value; if the reference pixel value is larger than a second brightness threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in the frame difference weight fusion image; if the reference pixel value is located between the first brightness threshold value and the second brightness threshold value, carrying out weighting operation on the first pixel value and the second pixel value to obtain a target pixel value; and if the moving speed of the moving vehicle is smaller than the speed threshold, the weight coefficient of the first pixel value is larger than the weight coefficient of the second pixel value.
Based on the same application concept as the above method, an electronic device is proposed in an embodiment of the present application, and referring to fig. 8, the electronic device may include a processor 81 and a machine-readable storage medium 82, where the machine-readable storage medium 82 stores machine executable instructions that can be executed by the processor 81; the processor 81 is configured to execute machine executable instructions to implement the license plate image processing method disclosed in the above example of the present application.
Based on the same application concept as the above method, the embodiment of the present application further provides a machine-readable storage medium, where a plurality of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the license plate image processing method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard disk drive), a solid state drive, any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer entity or by an article of manufacture having some functionality. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (10)

1. A license plate image processing method, the method comprising:
acquiring a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image, wherein the exposure of the first original image is larger than that of the second original image; wherein the first original image and the second original image each comprise a license plate of a moving vehicle;
Generating a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and a noise level corresponding to the first intermediate image;
based on a displacement difference image between the first intermediate image and the second intermediate image, carrying out frame difference fusion on the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image;
and generating a target image to be output based on the frame difference weight fusion image.
2. The method of claim 1, wherein the generating a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and a noise level corresponding to the first intermediate image comprises:
for each pixel point in the displacement difference image, determining a motion value corresponding to the pixel point based on a first pixel value corresponding to the pixel point in the first intermediate image, a second pixel value corresponding to the pixel point in the second intermediate image, and a noise level corresponding to the first pixel value; wherein the larger the motion value, the larger the displacement difference; and the noise level is obtained by querying a mapping relationship using the first pixel value, the mapping relationship representing a relationship between pixel values and noise levels.
3. The method according to claim 1, wherein the performing frame difference fusion on the first intermediate image and the second intermediate image based on the displacement difference image between the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image includes:
for each pixel point in the frame difference weight fusion image, if the corresponding target motion value of the pixel point in the displacement difference image is smaller than a first displacement difference threshold value, determining the corresponding target pixel value of the pixel point in the frame difference weight fusion image based on the corresponding first pixel value of the pixel point in the first intermediate image; if the target motion value is larger than a second displacement difference threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in a second intermediate image; and if the target motion value is between the first displacement difference threshold value and the second displacement difference threshold value, carrying out weighted operation on the first pixel value and the second pixel value to obtain a target pixel value.
4. A method according to any one of claims 1-3, wherein the acquiring a first intermediate image corresponding to the first original image and a second intermediate image corresponding to the second original image comprises:
Performing time domain noise reduction on the first original image to obtain a time domain noise reduced image; determining an exposure difference ratio based on the exposure of the first original image and the exposure of the second original image, and adjusting the time domain noise-reduced image based on the exposure difference ratio to obtain an adjusted image; generating a first intermediate image corresponding to the first original image based on the adjusted image;
performing spatial domain noise reduction on the second original image to obtain a spatial domain noise-reduced image; and generating a second intermediate image corresponding to the second original image based on the spatial domain noise-reduced image.
5. A method according to any one of claims 1 to 3, wherein,
the generating the target image to be output based on the frame difference weight fusion image comprises the following steps:
based on brightness information corresponding to the first intermediate image or the second intermediate image, carrying out brightness fusion on the first intermediate image and the second intermediate image to obtain a brightness weight fusion image;
and fusing the brightness weight fused image and the frame difference weight fused image to obtain a fused image, and generating a target image to be output based on the fused image.
6. The method of claim 5, wherein fusing the luminance weight fused image and the frame difference weight fused image to obtain a fused image comprises:
acquiring a time domain noise-reduced image corresponding to the first original image, and acquiring a motion area corresponding to a motion vehicle in the first original image based on the time domain noise-reduced image;
and based on a motion region corresponding to the motion vehicle in the first original image, performing motion region fusion on the brightness weight fusion image and the frame difference weight fusion image to obtain a fused image.
7. The method according to claim 6, wherein the performing motion region fusion on the luminance weight fusion image and the frame difference weight fusion image based on the motion region corresponding to the moving vehicle in the first original image to obtain a fused image includes:
for each pixel point in the fused image, if the pixel point is not located in the motion area, determining a target pixel value corresponding to the pixel point in the fused image based on a first pixel value corresponding to the pixel point in the brightness weight fused image;
If the pixel point is located in the motion area and the corresponding reference pixel value of the pixel point in the first intermediate image is smaller than a first brightness threshold value, determining a target pixel value based on the first pixel value; if the reference pixel value is larger than a second brightness threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in the frame difference weight fusion image; if the reference pixel value is located between the first brightness threshold value and the second brightness threshold value, carrying out weighting operation on the first pixel value and the second pixel value to obtain a target pixel value;
and if the moving speed of the moving vehicle is smaller than the speed threshold, the weight coefficient of the first pixel value is larger than the weight coefficient of the second pixel value.
8. A license plate image processing apparatus, characterized by comprising:
the acquisition module is used for acquiring a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image, wherein the exposure of the first original image is larger than that of the second original image; wherein the first original image and the second original image each comprise a license plate of a moving vehicle;
A processing module, configured to generate a displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and a noise level corresponding to the first intermediate image; based on a displacement difference image between the first intermediate image and the second intermediate image, carrying out frame difference fusion on the first intermediate image and the second intermediate image to obtain a frame difference weight fusion image; and generating a target image to be output based on the frame difference weight fusion image.
9. The apparatus of claim 8, wherein the processing module, when generating the displacement difference image between the first intermediate image and the second intermediate image based on the first intermediate image, the second intermediate image, and a noise level corresponding to the first intermediate image, is specifically configured to: for each pixel point in the displacement difference image, determine a motion value corresponding to the pixel point based on a first pixel value corresponding to the pixel point in the first intermediate image, a second pixel value corresponding to the pixel point in the second intermediate image, and a noise level corresponding to the first pixel value; wherein the larger the motion value, the larger the displacement difference; and the noise level is obtained by querying a mapping relationship using the first pixel value, the mapping relationship representing a relationship between pixel values and noise levels;
The processing module performs frame difference fusion on the first intermediate image and the second intermediate image based on a displacement difference image between the first intermediate image and the second intermediate image, and is specifically used for obtaining a frame difference weight fusion image: for each pixel point in the frame difference weight fusion image, if the target motion value corresponding to the pixel point in the displacement difference image is smaller than a first displacement difference threshold value, determining the target pixel value corresponding to the pixel point in the frame difference weight fusion image based on the first pixel value corresponding to the pixel point in the first intermediate image; if the target motion value is greater than a second displacement difference threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in the second intermediate image; if the target motion value is located between a first displacement difference threshold value and a second displacement difference threshold value, carrying out weighted operation on the first pixel value and the second pixel value to obtain a target pixel value;
the acquiring module is specifically configured to, when acquiring a first intermediate image corresponding to a first original image and a second intermediate image corresponding to a second original image: performing time domain noise reduction on the first original image to obtain a time domain noise reduced image; determining an exposure difference ratio based on the exposure of the first original image and the exposure of the second original image, and adjusting the time domain noise-reduced image based on the exposure difference ratio to obtain an adjusted image; generating a first intermediate image corresponding to the first original image based on the adjusted image; performing spatial domain noise reduction on the second original image to obtain a spatial domain noise-reduced image; generating a second intermediate image corresponding to the second original image based on the spatial domain noise-reduced image;
The processing module is specifically configured to, when generating the target image to be output based on the frame difference weight fusion image: based on brightness information corresponding to the first intermediate image or the second intermediate image, carrying out brightness fusion on the first intermediate image and the second intermediate image to obtain a brightness weight fusion image; fusing the brightness weight fused image and the frame difference weight fused image to obtain a fused image, and generating the target image to be output based on the fused image;
the processing module fuses the brightness weight fused image and the frame difference weight fused image, and is specifically used for obtaining a fused image: acquiring a time domain noise-reduced image corresponding to the first original image, and acquiring a motion area corresponding to a motion vehicle in the first original image based on the time domain noise-reduced image; based on a motion region corresponding to a motion vehicle in the first original image, performing motion region fusion on the brightness weight fusion image and the frame difference weight fusion image to obtain a fused image;
the processing module performs motion region fusion on the brightness weight fusion image and the frame difference weight fusion image based on a motion region corresponding to a motion vehicle in the first original image, and is specifically used for when obtaining a fused image: for each pixel point in the fused image, if the pixel point is not located in the motion area, determining a target pixel value corresponding to the pixel point in the fused image based on a first pixel value corresponding to the pixel point in the brightness weight fused image; if the pixel point is located in the motion area and the reference pixel value corresponding to the pixel point in the first intermediate image is smaller than a first brightness threshold value, determining a target pixel value based on the first pixel value; if the reference pixel value is larger than a second brightness threshold value, determining a target pixel value based on a second pixel value corresponding to the pixel point in the frame difference weight fusion image; if the reference pixel value is located between the first brightness threshold value and the second brightness threshold value, carrying out weighting operation on the first pixel value and the second pixel value to obtain a target pixel value; and if the moving speed of the moving vehicle is smaller than the speed threshold, the weight coefficient of the first pixel value is larger than the weight coefficient of the second pixel value.
10. An electronic device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine executable instructions to implement the method of any of claims 1-7.