CN115456895A - Image acquisition method and device for foggy scene - Google Patents

Image acquisition method and device for foggy scene

Info

Publication number
CN115456895A
CN115456895A
Authority
CN
China
Prior art keywords
image
gradient
sub
overlapping area
gray
Prior art date
Legal status
Pending
Application number
CN202211081681.XA
Other languages
Chinese (zh)
Inventor
路萍萍
袁高阳
Current Assignee
Hisense Mobile Communications Technology Co Ltd
Original Assignee
Hisense Mobile Communications Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Mobile Communications Technology Co Ltd
Priority to CN202211081681.XA
Publication of CN115456895A
Legal status: Pending

Classifications

    • G06T5/73
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application provides an image acquisition method and device for a foggy scene, which are used to improve the experience of a user capturing images in a foggy scene. The method provided by the application comprises the following steps: in response to an image acquisition operation by a user of the terminal device, respectively acquiring an infrared image and a visible light image obtained by an infrared camera and a visible light camera synchronously capturing a set scene; acquiring image parameters of a first overlapping area in the infrared image that overlaps the visible light image, and acquiring image parameters of a second overlapping area in the visible light image that overlaps the infrared image, the image parameters including gradient and/or gray variance; when the set scene is determined to be a foggy scene according to the comparison result of the image parameters of the first overlapping area and the second overlapping area, performing defogging processing on the visible light image according to an atmospheric scattering model to obtain a defogged image; and displaying the defogged image.

Description

Image acquisition method and device for foggy scene
Technical Field
The application relates to the technical field of image acquisition, in particular to an image acquisition method and device for a foggy day scene.
Background
In a foggy scene, reduced atmospheric transparency blurs images captured by a mobile terminal device, so the quality of the captured images is severely degraded, which in turn affects the subsequent use and analysis of the images. In the prior art, when a mobile terminal device captures an image in a foggy scene, the foggy image is generally stored in a gallery; the user then opens a third-party gallery application, loads the foggy image, and performs a defogging operation on it to obtain a defogged image. This makes the defogging operation inconvenient, and the user cannot immediately obtain a satisfactory image in a foggy scene, which degrades the user experience.
Disclosure of Invention
The embodiment of the application provides an image acquisition method and device for a foggy scene, which are used to improve the experience of a user capturing images in a foggy scene.
In a first aspect, an embodiment of the present application provides an image acquisition method for a foggy scene, applied to a terminal device that includes at least an infrared camera and a visible light camera, the method comprising:
responding to the image acquisition operation of a user of the terminal equipment, and respectively acquiring an infrared image and a visible light image which are obtained by synchronously acquiring a set scene by the infrared camera and the visible light camera; acquiring image parameters of a first overlapping area in the infrared image, which is overlapped with the visible light image, and acquiring image parameters of a second overlapping area in the visible light image, which is overlapped with the infrared image; the image parameters include gradients and/or gray-scale variances; when the set scene is determined to be a foggy scene according to the comparison result of the image parameters of the first overlapping area and the second overlapping area, carrying out defogging treatment on the visible light image according to an atmospheric scattering model to obtain a defogged image; and displaying the defogged image.
Based on the scheme, whether the set scene is a foggy scene can be determined through the infrared image and the visible light image. When the set scene is determined to be a foggy scene, the visible light image is defogged through the atmospheric scattering model to obtain the defogged image. When a user captures an image in a foggy scene, the acquired image is already the defogged image, and the foggy image does not need to be loaded and defogged in a third-party gallery to obtain the defogged image, which saves user operations and improves the user experience.
In one possible implementation manner, the determining that the set scene is a foggy day scene according to the comparison result between the image parameter of the first overlapping area and the image parameter of the second overlapping area includes: the image parameter is a gradient, and when a first gradient of the first overlapping area is larger than a second gradient of the second overlapping area, the set scene is determined to be a foggy day scene; or, the image parameter is a gray variance, and when a first gray variance of the first overlapping area is greater than a second gray variance of the second overlapping area, the set scene is determined to be a foggy day scene; or the image parameters comprise gradients and gray-scale variances, and when the first gradient of the first overlapping area is larger than the second gradient of the second overlapping area, and the first gray-scale variance of the first overlapping area is larger than the second gray-scale variance of the second overlapping area, the set scene is determined to be a foggy day scene.
Based on the scheme, whether the set scene is a foggy scene can be detected through the gradients or gray variances of the infrared image and the visible light image, so as to determine whether to perform defogging processing on the visible light image.
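For illustration only, a minimal sketch of this comparison in Python is given below; the function name, the use of OpenCV's Laplacian operator, and the combined gradient-and-variance rule are assumptions made for the example rather than limitations of the scheme:

```python
import cv2
import numpy as np

def is_foggy(ir_overlap_gray, vis_overlap_gray):
    """Compare the gradient and gray variance of the first (infrared) and
    second (visible light) overlapping areas, both given as grayscale arrays."""
    # Per-pixel gradient via the Laplacian operator, accumulated over the region
    ir_grad = np.abs(cv2.Laplacian(ir_overlap_gray, cv2.CV_64F)).sum()
    vis_grad = np.abs(cv2.Laplacian(vis_overlap_gray, cv2.CV_64F)).sum()

    # Gray variance of each overlapping area
    ir_var = np.var(ir_overlap_gray.astype(np.float64))
    vis_var = np.var(vis_overlap_gray.astype(np.float64))

    # Foggy scene: the infrared region is both sharper and higher-contrast
    return ir_grad > vis_grad and ir_var > vis_var
```

Either test can also be used on its own, which corresponds to the gradient-only and gray-variance-only implementations described above.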
In one possible implementation, the method further comprises obtaining the first gradient and the second gradient by:
calculating a gradient of each pixel in the first overlapping region by a Laplacian operator to obtain the first gradient, and calculating a gradient of each pixel in the second overlapping region by a Laplacian operator to obtain the second gradient; or,
dividing the first overlapping area into M sub-image blocks, calculating a gradient of each pixel in each of the M sub-image blocks through a Laplacian operator to obtain the first gradient, dividing the second overlapping area into N sub-image blocks, and calculating a gradient of each pixel in each of the N sub-image blocks through a Laplacian operator to obtain the second gradient.
Based on the scheme, the image can be divided into a plurality of sub-image blocks, so that the influence of objects with different sizes on the gradient is avoided, and the accuracy of the gradient can be improved.
In one possible implementation, the calculating a gradient of each pixel in each of the M sub-image blocks by using a laplacian operator to obtain the first gradient includes: calculating a gradient of each pixel in each sub-image block included in the first overlapping area by using a laplacian operator; taking the sum of the gradients of each pixel in each sub-image block as the gradient of each sub-image block; weighting the gradient of each sub image block to obtain the first gradient;
the calculating, by the laplacian operator, a gradient of each pixel in each of the N sub-image blocks to obtain the second gradient includes: calculating the gradient of each pixel in each sub image block included in the second overlapped area through a Laplace operator; taking the sum of the gradients of each pixel in each sub-image block as the gradient of each sub-image block; and weighting the gradient of each sub image block to obtain the second gradient.
In one possible implementation, the method further includes determining the first gray variance and the second gray variance by: determining a gray mean value of the first overlapping area, and determining the first gray variance according to the gray value of each pixel in the first overlapping area and the gray mean value of the first overlapping area; determining a gray mean value of the second overlapping area, and determining the second gray variance according to the gray value of each pixel in the second overlapping area and the gray mean value of the second overlapping area; or dividing the first overlapping area into M sub-image blocks, determining a gray mean value of each sub-image block in the M sub-image blocks, determining gray variances respectively corresponding to the M sub-image blocks according to a gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the M sub-image blocks to obtain a first gray variance; dividing the second overlapping area into N sub-image blocks, determining a gray mean value of each sub-image block in the N sub-image blocks, determining gray variances respectively corresponding to the N sub-image blocks according to the gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the N sub-image blocks to obtain a second gray variance.
Based on the scheme, the upper half of the image can be given a higher weight through weighting, which better matches the actual situation and improves the accuracy of foggy-scene detection.
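A sketch of the block-wise gray-variance computation is given below; the uniform grid split and the caller-supplied weight matrix are illustrative assumptions (the row-based weighting discussed later in the description is one possible choice of weights):

```python
import numpy as np

def weighted_block_gray_variance(gray, rows, cols, weights=None):
    """Split an overlapping area into rows*cols sub-image blocks, compute the
    gray variance of each block around its own gray mean, and combine the
    block variances by weighting; weights has shape (rows, cols)."""
    gray = gray.astype(np.float64)
    h, w = gray.shape
    if weights is None:
        weights = np.ones((rows, cols))  # equal weights if none are supplied
    total = 0.0
    for i in range(rows):
        for j in range(cols):
            block = gray[i * h // rows:(i + 1) * h // rows,
                         j * w // cols:(j + 1) * w // cols]
            total += weights[i, j] * np.var(block)
    return total
```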
In a possible implementation manner, the defogging processing on the visible light image according to the atmospheric scattering model to obtain a defogged image includes: determining the minimum value of the three color channels of each pixel in the visible light image to obtain a minimum value image, and determining the maximum value of the three color channels of each pixel in the visible light image to obtain a maximum value image; determining a global atmospheric optical parameter according to the minimum value image and the maximum value image; the global atmospheric optical parameter is an average value of a first maximum value in a filtered image obtained after the minimum value image is filtered and a second maximum value in the maximum value image; determining an ambient light parameter according to the configured defogging adjustment parameter and the minimum value image; and defogging the visible light image based on the atmospheric scattering model according to the global atmospheric optical parameter and the environmental optical parameter to obtain a defogged image.
In one possible implementation, the determining an ambient light parameter according to the defogging adjustment parameter and the minimum value image includes: determining an ambient light parameter by the following formula:
H(x) = min(α·M_b(x), M(x));
wherein H(x) represents the ambient light parameter corresponding to pixel x, α represents a luminance parameter, α = min(ρ·M_b_mean, Q), ρ represents the defogging adjustment parameter, M_b_mean represents the mean value of all pixels of the filtered image obtained after filtering the minimum value image, M_b(x) represents the pixel value of pixel x in the filtered image obtained after filtering the minimum value image, M(x) represents the pixel value of pixel x in the minimum value image, and Q is less than 1 and greater than 0.5.
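The sketch below strings these pieces together. Only the minimum/maximum images, the global atmospheric light parameter, α and H(x) follow the formulas above; the median filter used to produce the filtered minimum-value image, the example values of ρ and Q, and the final transmission estimate and scene-radiance recovery are assumptions added to make the example complete:

```python
import cv2
import numpy as np

def defog(bgr, rho=0.9, Q=0.8, ksize=15, t0=0.1):
    """Defog a visible light image; rho is the configured defogging adjustment
    parameter and Q lies between 0.5 and 1 (both example values are assumed)."""
    I = bgr.astype(np.float64) / 255.0
    M = I.min(axis=2)   # minimum-value image: per-pixel minimum of the three color channels
    W = I.max(axis=2)   # maximum-value image: per-pixel maximum of the three color channels
    # Filtered minimum-value image M_b; a median filter is an assumed choice of filter
    Mb = cv2.medianBlur((M * 255).astype(np.uint8), ksize).astype(np.float64) / 255.0
    A = 0.5 * (Mb.max() + W.max())      # global atmospheric light parameter
    alpha = min(rho * Mb.mean(), Q)     # luminance parameter alpha = min(rho*M_b_mean, Q)
    H = np.minimum(alpha * Mb, M)       # ambient light parameter H(x) = min(alpha*M_b(x), M(x))
    # Assumed inversion of the atmospheric scattering model I = J*t + A*(1 - t),
    # treating H as an estimate of A*(1 - t)
    t = np.clip(1.0 - H / max(A, 1e-6), t0, 1.0)
    J = (I - H[..., None]) / t[..., None]
    return np.clip(J * 255.0, 0.0, 255.0).astype(np.uint8)
```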
In a second aspect, an embodiment of the present application provides an image capturing device for a foggy day scene, including:
the acquisition module is used for responding to the image acquisition operation of a user of the terminal equipment and respectively acquiring an infrared image and a visible light image which are obtained by synchronously acquiring a set scene by the infrared camera and the visible light camera; acquiring image parameters of a first overlapping area overlapping with the visible light image in the infrared image, and acquiring image parameters of a second overlapping area overlapping with the infrared image in the visible light image; the image parameters include gradients and/or gray-scale variances;
the processing module is used for performing defogging processing on the visible light image according to an atmospheric scattering model to obtain a defogged image when the set scene is determined to be a foggy scene according to the comparison result of the image parameters of the first overlapping area and the image parameters of the second overlapping area;
and the display module is used for displaying the defogged image.
In a possible implementation manner, when the set scene is determined to be a foggy scene according to the comparison result between the image parameter of the first overlapping area and the image parameter of the second overlapping area, the processing module is specifically configured to: the image parameter is a gradient, and when a first gradient of the first overlapping area is larger than a second gradient of the second overlapping area, the set scene is determined to be a foggy day scene; or, the image parameter is a gray variance, and when a first gray variance of the first overlapping area is greater than a second gray variance of the second overlapping area, the set scene is determined to be a foggy day scene; or, the image parameters include gradients and gray variances, and when the first gradient of the first overlapping area is greater than the second gradient of the second overlapping area and the first gray variance of the first overlapping area is greater than the second gray variance of the second overlapping area, it is determined that the set scene is a foggy day scene.
In a possible implementation manner, the obtaining module is further configured to obtain the first gradient and the second gradient by: calculating a gradient of each pixel in the first overlapping region by a laplacian operator to obtain the first gradient, and calculating a gradient of each pixel in the second overlapping region by a laplacian operator to obtain the second gradient; or, the first overlapping area is divided into M sub-image blocks, the gradient of each pixel in each of the M sub-image blocks is calculated by using a laplacian operator to obtain the first gradient, the second overlapping area is divided into N sub-image blocks, and the gradient of each pixel in each of the N sub-image blocks is calculated by using a laplacian operator to obtain the second gradient.
In a possible implementation manner, when calculating the gradient of each pixel in each of the M sub-image blocks through the laplacian operator to obtain the first gradient, the obtaining module is specifically configured to: calculating a gradient of each pixel in each sub-image block included in the first overlapping area by using a laplacian operator; taking the sum of the gradients of each pixel in each sub-image block as the gradient of each sub-image block; weighting the gradient of each sub image block to obtain the first gradient; the obtaining module is configured to calculate, through a laplacian operator, a gradient of each pixel in each of the N sub-image blocks to obtain the second gradient, and is specifically configured to: calculating the gradient of each pixel in each sub image block included in the second overlapped area through a Laplace operator; taking the sum of the gradients of each pixel in each sub image block as the gradient of each sub image block; and weighting the gradient of each sub image block to obtain the second gradient.
In a possible implementation manner, the obtaining module is further configured to determine the first gray variance and the second gray variance by: determining a gray mean value of the first overlapping area, and determining the first gray variance according to the gray value of each pixel in the first overlapping area and the gray mean value of the first overlapping area; determining a gray mean value of the second overlapping area, and determining the second gray variance according to the gray value of each pixel in the second overlapping area and the gray mean value of the second overlapping area; or dividing the first overlapping area into M sub-image blocks, determining a gray mean value of each sub-image block in the M sub-image blocks, determining gray variances respectively corresponding to the M sub-image blocks according to a gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the M sub-image blocks to obtain the first gray variance; dividing the second overlapping area into N sub-image blocks, determining a gray mean value of each sub-image block in the N sub-image blocks, determining gray variances respectively corresponding to the N sub-image blocks according to the gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the N sub-image blocks to obtain the second gray variance.
In a possible implementation manner, when performing defogging processing on the visible light image according to the atmospheric scattering model to obtain a defogged image, the processing module is specifically configured to: determining the minimum value of the three color channels of each pixel in the visible light image to obtain a minimum value image, and determining the maximum value of the three color channels of each pixel in the visible light image to obtain a maximum value image; determining a global atmospheric optical parameter according to the minimum value image and the maximum value image; the global atmospheric optical parameter is an average value of a first maximum value in a filtered image obtained after the minimum value image is filtered and a second maximum value in the maximum value image; determining an ambient light parameter according to the configured defogging adjustment parameter and the minimum value image; and carrying out defogging treatment on the visible light image based on the atmospheric scattering model according to the global atmospheric optical parameter and the environmental optical parameter to obtain a defogged image.
In a possible implementation manner, when determining the ambient light parameter according to the defogging adjustment parameter and the minimum value image, the processing module is specifically configured to determine the ambient light parameter according to the following formula:
H(x) = min(α·M_b(x), M(x));
wherein H(x) represents the ambient light parameter corresponding to pixel x, α represents a luminance parameter, α = min(ρ·M_b_mean, Q), ρ represents the defogging adjustment parameter, M_b_mean represents the mean value of all pixels of the filtered image obtained after filtering the minimum value image, M_b(x) represents the pixel value of pixel x in the filtered image obtained after filtering the minimum value image, M(x) represents the pixel value of pixel x in the minimum value image, and Q is less than 1 and greater than 0.5.
In a third aspect, an embodiment of the present application provides a terminal device, including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the methods in the first aspect and the different implementation modes of the first aspect according to the obtained program instructions.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where instructions are stored, and when the instructions are executed on a computer, the instructions cause the computer to perform the method described in the first aspect and the different implementation manners of the first aspect.
In addition, for technical effects brought by any one implementation manner of the second aspect to the fourth aspect, reference may be made to the technical effects brought by the first aspect and different implementation manners of the first aspect, and details are not described here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic hardware diagram of a terminal device according to an embodiment of the present application;
fig. 2 is a software schematic diagram of a terminal device according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an image acquisition method for a foggy day scene according to an embodiment of the present disclosure;
fig. 4 is a flowchart of a determination method for determining a zoom parameter and a pan parameter according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an infrared image and a visible light image provided by an embodiment of the present application;
FIG. 6 is a diagram illustrating key points provided in an embodiment of the present application;
FIG. 7 is a schematic diagram of another key point provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a zoomed visible light image and an infrared image provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of coordinates of key points provided in an embodiment of the present application;
FIG. 10 is a schematic view of a second overlap region provided by an embodiment of the present application;
fig. 11 is a schematic diagram illustrating a first overlapping area being divided into sub image blocks according to an embodiment of the present application;
FIG. 12 is a schematic diagram of gradients corresponding to sub-image blocks according to an embodiment of the present application;
fig. 13 is a schematic diagram of weights corresponding to sub image blocks according to an embodiment of the present application;
fig. 14 is a flowchart of defogging processing on a visible light image according to an embodiment of the present application;
FIG. 15 is a schematic diagram of an image capture interface provided by an embodiment of the present application;
fig. 16 is a schematic view of an image capturing device for a foggy day scene according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only some embodiments of the present application, and not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
It is noted that relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
In the prior art, when a user captures an image through a camera of a mobile terminal in a foggy scene, the captured foggy image is generally stored first; the foggy image is then opened in third-party software and a defogging operation is performed on it. With this approach, the user can defog the image only with the help of third-party software, which makes the defogging operation inconvenient, and the user cannot immediately obtain a satisfactory image in a foggy scene, which degrades the user experience.
Based on these problems, the application provides an image acquisition method, device and equipment for a foggy scene: an infrared camera and a visible light camera of the terminal device synchronously acquire an infrared image and a visible light image of the current scene, whether the current scene is a foggy scene is then judged according to the visible light image and the infrared image, and when a foggy scene is determined, a defogging operation is directly performed on the visible light image according to the atmospheric scattering model. The method can improve the photographing experience of a user using the terminal device in a foggy scene: the user does not need to enter a third-party gallery to perform the defogging operation, the image acquired by the terminal device is already the defogged image, and user operations are saved.
The image acquisition method for the foggy day scene can be executed through the terminal equipment. The following describes a terminal device according to an embodiment of the present application. It should be understood that the terminal device 100 shown in fig. 1 is merely an example, and the terminal device 100 may have more or fewer components than shown in fig. 1, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A hardware configuration block diagram of the terminal device 100 is exemplarily shown in fig. 1. As shown in fig. 1, the terminal device 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 180 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
The memory 120 may be used to store software programs and data. The processor 180 performs various functions of the terminal device 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. The memory 120 stores an operating system that enables the terminal device 100 to operate. The memory 120 may store an operating system and various application programs, and may also store codes for performing the methods described in the embodiments of the present application.
The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal apparatus 100, and specifically, the display unit 130 may include a touch screen 131 disposed on the front surface of the terminal apparatus 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal apparatus 100. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the terminal device 100. The display 132 may be a color liquid crystal display, and may be configured in the form of a liquid crystal display, a light emitting diode, or the like.
The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the terminal device 100, and after the integration, the touch screen may be referred to as a touch display screen for short. The display unit 130 in this application can be used to display the image or video captured by the camera 140 in this application.
The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals. The camera 140 may include a visible light camera and an infrared camera in the present application.
The terminal device 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal device 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
The audio circuitry 160, speaker 161, microphone 162 may provide an audio interface between the user and the terminal device 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, and convert the electrical signal into a sound signal for output by the speaker 161. The terminal device 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts the collected sound signal into an electrical signal, converts the electrical signal into audio data after being received by the audio circuit 160, and outputs the audio data to the RF circuit 110 to be transmitted to, for example, another terminal or outputs the audio data to the memory 120 for further processing.
Wi-Fi belongs to a short-distance wireless transmission technology, and the terminal device 100 can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the Wi-Fi module 170, and provides wireless broadband internet access for the user.
The processor 180 is a control center of the terminal device 100, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. In the present application, the processor 180 may run an operating system, an application program, a user interface display, and a touch response, and the processing method described in the embodiments of the present application. Further, the processor 180 is coupled with the display unit 130.
And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the terminal device 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) that is also equipped with a bluetooth module through the bluetooth module 181, so as to perform data interaction.
The terminal device 100 also includes a power supply 190 (such as a battery) for powering the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The terminal device 100 may further be configured with a power button for powering on and off the terminal, and locking the screen.
Fig. 2 is a block diagram of a software configuration of the terminal device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android System is divided into four layers, which are an application layer (System Apps), an application Framework layer (Java API Framework), a System runtime layer (Native), and a Kernel layer (Linux Kernel), from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 2, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication function of the terminal device 100. Such as management of call status (including on, off, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables the application to display notification information in the status bar, can be used to convey notification-type messages, can disappear automatically after a short dwell, and does not require user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the mobile terminal vibrates, an indicator light flashes, and the like.
The system operation library layer is divided into two parts: android Runtime (Android Runtime) and system libraries. The Android Runtime comprises a core library and a virtual machine. The Android Runtime is responsible for scheduling and managing an Android system.
The core library comprises two parts: one part is a function which needs to be called by java language, and the other part is a core library of android.
The application layer and the application framework layer run in a virtual machine. And executing java files of the application program layer and the application program framework layer into a binary file by the virtual machine. The virtual machine is used for performing the functions of object life cycle management, stack management, thread management, safety and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface Manager (Surface Manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports a variety of commonly used audio, video format playback and recording, and still image files, among others. The media library may support a variety of audio-video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
The kernel layer is a layer between hardware and software. The kernel layer provides core system services such as security, memory management, process management, network protocol stack, and driver models. The inner core layer at least comprises a display driver, a camera driver, an audio driver and a sensor driver.
It should be noted that, the configuration of different terminal devices may be different, and therefore, the above-mentioned fig. 1-fig. 2 are only used as an exemplary description, and the present application is not limited to this.
The embodiment of the application provides an image acquisition method for a foggy scene, and fig. 3 exemplarily shows the flow of the image acquisition method for the foggy scene. The flow may be executed by an image acquisition apparatus, which may be located in the terminal device 100 described in fig. 1, for example in the processor 180, or may be the terminal device 100 itself. Taking the processor 180 as the executing device as an example, the numeric identifier of the processor 180 is omitted in the following description for convenience. The specific flow is as follows:
301, in response to an image acquisition operation of a user of the terminal device, acquiring an infrared image and a visible light image obtained by synchronously acquiring a set scene by an infrared camera and a visible light camera respectively.
In some embodiments, the terminal device includes an infrared camera and a visible light camera. The infrared camera is not influenced by weather conditions and visible light, and can obtain a relatively clear infrared image. For example, under severe weather conditions such as rainy days, snowy days or haze, the infrared camera can still obtain clear infrared images.
In some embodiments, when a user uses the terminal device to capture an image of a set scene, an infrared image and a visible light image obtained by the infrared camera and the visible light camera synchronously capturing the set scene are respectively acquired in response to the image acquisition operation on the terminal device. The infrared image and the visible light image overlap.
And 302, acquiring image parameters of a first overlapping area overlapping with the visible light image in the infrared image, and acquiring image parameters of a second overlapping area overlapping with the infrared image in the visible light image.
In some scenarios, when the infrared image and the visible light image are completely overlapped, the image parameter of the infrared image may be acquired as the image parameter of the first overlapped area, and the image parameter of the visible light image may be acquired as the image parameter of the second overlapped area. Wherein the image parameters comprise gradients and/or gray-scale variances.
In other scenes, when the infrared image and the visible light image are not completely overlapped, image parameters of a first overlapped area overlapped with the visible light image in the infrared image are acquired, and image parameters of a second overlapped area overlapped with the infrared image in the visible light image are acquired.
In some embodiments, the field of view of the visible light image contains the field of view of the infrared image. In this case, the entire infrared image serves as the first overlapping area overlapping with the visible light image, and the second overlapping area overlapping with the infrared image in the visible light image can be determined through the scaling parameter and the translation parameter.
In some embodiments, when the image parameter is a gradient, a first gradient of a first overlapping region in the infrared image overlapping the visible light image may be acquired, and a second gradient of a second overlapping region in the visible light image overlapping the infrared image may be acquired.
In other embodiments, when the image parameter is a gray scale variance, a first gray scale variance of a first overlapping region of the infrared image that overlaps the visible image may be obtained, and a second gray scale variance of a second overlapping region of the visible image that overlaps the infrared image may be obtained.
In some embodiments, when the image parameters are a gradient and a gray variance, a first gradient and a first gray variance of a first overlapping region overlapping the visible light image in the infrared image may be obtained, and a second gradient and a second gray variance of a second overlapping region overlapping the infrared image in the visible light image may be obtained.
303, when the setting scene is determined to be a foggy scene according to the comparison result of the image parameters of the first overlapping area and the second overlapping area, performing defogging processing on the visible light image according to the atmospheric scattering model to obtain a defogged image.
In some scenes, the image parameter is a gradient, and when the first gradient of the first overlapping area is greater than the second gradient of the second overlapping area, the set scene is determined to be a foggy scene. In other scenes, the image parameter is a gray variance, and when the first gray variance of the first overlapping area is greater than the second gray variance of the second overlapping area, the set scene is determined to be a foggy scene. In some scenes, the image parameters include a gradient and a gray variance, and when the first gradient of the first overlapping area is greater than the second gradient of the second overlapping area and the first gray variance of the first overlapping area is greater than the second gray variance of the second overlapping area, the set scene is determined to be a foggy scene.
In some embodiments, when the set scene is determined to be a foggy day scene, the visible light image may be defogged according to the atmospheric scattering model to obtain a defogged image. Specifically, the minimum value of the three color channels of each pixel in the visible light image may be determined to obtain a minimum value image, and the maximum value of the three color channels of each pixel in the visible light image may be determined to obtain a maximum value image. Further, a global atmospheric light parameter may be determined from the minimum value image and the maximum value image. And determining the ambient light parameters according to the configured defogging adjustment parameters and the minimum value image. And carrying out defogging treatment on the visible light image based on the atmospheric scattering model according to the global atmospheric optical parameter and the environmental optical parameter to obtain a defogged image.
And 304, displaying the defogged image.
Based on the scheme, whether the set scene is a foggy scene or not can be determined according to the visible light image and the infrared image, and then the image is defogged and displayed according to the atmospheric scattering model, so that the image collected by the user in the foggy scene is the defogged image, the user is prevented from entering a third-party gallery to perform defogging operation, and the user experience is improved.
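As a rough illustration of how steps 301-304 fit together, the glue sketch below assumes the overlapping areas have already been extracted and that foggy-scene detection and defogging are provided as callables of the kind sketched elsewhere in this description (all names are assumptions):

```python
def on_capture(ir_gray, visible_bgr, visible_overlap_gray, is_foggy, defog):
    """ir_gray: first overlapping area (grayscale infrared image);
    visible_overlap_gray: second overlapping area cropped from visible_bgr;
    is_foggy / defog: callables for the foggy-scene check and for defogging
    based on the atmospheric scattering model."""
    if is_foggy(ir_gray, visible_overlap_gray):  # foggy-scene check (step 303)
        return defog(visible_bgr)                # defog the visible light image
    return visible_bgr                           # image to display (step 304)
```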
In some embodiments, when the infrared image and the visible light image obtained by synchronously acquiring the set scene by the infrared camera and the visible light camera in the terminal device are not completely overlapped, the second overlapped area overlapped with the infrared image in the visible light image may be determined based on the infrared image. Wherein the second overlapping area is determined by the visible light image through a scaling parameter and a translation parameter. Specifically, referring to fig. 4, the scaling parameter and the translation parameter may be determined as follows:
401, respectively acquiring a first infrared image and a first visible light image obtained by synchronously acquiring a first object by an infrared camera and a visible light camera.
In some embodiments, the first object may be synchronously captured by the infrared camera and the visible light camera to obtain the first infrared image and the first visible light image. As an example, after the first object is subjected to image acquisition, a first infrared image and a first visible light image are obtained as shown in fig. 5. The field angle of the visible light image is not consistent with that of the infrared image, and the field angle of the visible light image is larger than that of the infrared image.
The pixel coordinates of the set keypoints of the first object in the first infrared image and the pixel coordinates of the set keypoints of the first object in the first visible-light image are determined 402.
In some embodiments, the pixel coordinates of the set key points of the first object in the first infrared image and the pixel coordinates of the set key points of the first object in the first visible-light image can be extracted. Specifically, as shown in fig. 6, two key points I_1(x,y) and I_2(x,y) of the display screen in the infrared image and two key points V_1(x,y) and V_2(x,y) of the display screen in the visible light image may be extracted.
And 403, determining a scaling parameter of the first visible-light image relative to the first infrared image according to the pixel coordinates of the set key points of the first object in the first infrared image and the pixel coordinates of the set key points of the first object in the first visible-light image.
In some embodiments, after extracting the pixel coordinates of the set key points of the first object in the first infrared image and the pixel coordinates of the set key points of the first object in the first visible-light image, the scaling parameter of the first visible-light image relative to the first infrared image may be determined as follows. Following the above example, after the pixel coordinates of the set key points of the display screen are extracted, the width of the display screen in the infrared image and the width of the display screen in the visible light image satisfy the conditions shown in the following formulas:
I_width = I_2,y - I_1,y;
V_width = V_2,y - V_1,y;
wherein I_width represents the width of the display screen in the infrared image, I_2,y represents the y coordinate of the set key point I_2(x,y), I_1,y represents the y coordinate of the set key point I_1(x,y), V_width represents the width of the display screen in the visible light image, V_2,y represents the y coordinate of the set key point V_2(x,y), and V_1,y represents the y coordinate of the set key point V_1(x,y).
Further, the scaling parameter of the first visible-light image relative to the first infrared image satisfies the condition shown in the following formula:
S = I_width / V_width;
where S denotes a scaling parameter (scale scaling factor).
In some scenarios, to improve the accuracy of the scaling parameter, multiple pairs of key-point pixel coordinates may be extracted, and the scaling parameter of the first visible light image relative to the first infrared image may be determined through the pixel coordinates of these key points. Specifically, as shown in fig. 7, two pairs of set key points I_1(x,y), I_2(x,y), I_3(x,y), I_4(x,y) of the display screen in the infrared image and two pairs of set key points V_1(x,y), V_2(x,y), V_3(x,y), V_4(x,y) of the display screen in the visible light image may be extracted. Further, the width and height of the display screen in the infrared image and the width and height of the display screen in the visible light image may be determined respectively:
I_width,1 = I_2,y - I_1,y; V_width,1 = V_2,y - V_1,y;
I_width,2 = I_4,y - I_3,y; V_width,2 = V_4,y - V_3,y;
I_height,1 = I_3,x - I_1,x; V_height,1 = V_3,x - V_1,x;
I_height,2 = I_4,x - I_2,x; V_height,2 = V_4,x - V_2,x;
wherein I_width,1 and I_width,2 represent the width of the display screen in the first infrared image, V_width,1 and V_width,2 represent the width of the display screen in the first visible-light image, I_height,1 and I_height,2 represent the height of the display screen in the first infrared image, and V_height,1 and V_height,2 represent the height of the display screen in the first visible-light image.
Further, the scaling parameter of the first visible-light image relative to the first infrared image satisfies a condition shown in the following formula:
S = (I_width,1/V_width,1 + I_width,2/V_width,2 + I_height,1/V_height,1 + I_height,2/V_height,2) / 4;
where S denotes a scaling parameter.
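A sketch of this multi-pair estimate follows; taking the scaling parameter as the plain mean of the four infrared-to-visible width and height ratios is an assumption (the exact combination used in the formula above may differ), and the coordinate convention matches the definitions above, with widths taken from y-coordinate differences and heights from x-coordinate differences:

```python
import numpy as np

def estimate_scale(ir_pts, vis_pts):
    """ir_pts and vis_pts are arrays of shape (4, 2) holding the (x, y) pixel
    coordinates of the set key points I_1..I_4 and V_1..V_4 in the same order."""
    ir = np.asarray(ir_pts, dtype=float)
    vis = np.asarray(vis_pts, dtype=float)
    ratios = [
        (ir[1, 1] - ir[0, 1]) / (vis[1, 1] - vis[0, 1]),  # I_width,1 / V_width,1
        (ir[3, 1] - ir[2, 1]) / (vis[3, 1] - vis[2, 1]),  # I_width,2 / V_width,2
        (ir[2, 0] - ir[0, 0]) / (vis[2, 0] - vis[0, 0]),  # I_height,1 / V_height,1
        (ir[3, 0] - ir[1, 0]) / (vis[3, 0] - vis[1, 0]),  # I_height,2 / V_height,2
    ]
    return float(np.mean(ratios))
```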
And 404, zooming the first visible light image according to the zooming parameters to obtain a second visible light image, and determining the pixel coordinates of the set key points of the first object in the second visible light image.
In some embodiments, the pixel coordinates of the set keypoints of the first object in the second visible-light image satisfy the condition shown in the following formula:
V'_1 = V_1(x,y)·S;
V'_2 = V_2(x,y)·S;
wherein V'_1 and V'_2 represent the pixel coordinates of the set key points of the first object in the second visible-light image.
And 405, determining translation parameters of the second visible light image according to the pixel coordinates of the set key points of the first object in the second visible light image and the pixel coordinates of the set key points in the first infrared image.
In some embodiments, the translation parameter satisfies the condition shown in the following formula:
Δx = I_1,x - V'_1,x;
Δy = I_1,y - V'_1,y;
wherein Δx represents the translation parameter in the x-direction, Δy represents the translation parameter in the y-direction, I_1,x represents the x coordinate of the key point I_1(x,y) in the first infrared image, V'_1,x represents the x coordinate of V'_1, I_1,y represents the y coordinate of the key point I_1(x,y) in the first infrared image, and V'_1,y represents the y coordinate of V'_1.
In some embodiments, when the scaling parameter is determined from a plurality of pairs of set key points, the translation parameters satisfy the conditions shown in the following formulas:
Δx = (1/n)·Σ_i (I_i,x - S·V_i,x);
Δy = (1/n)·Σ_i (I_i,y - S·V_i,y);
wherein Δx represents the translation parameter in the x-direction, Δy represents the translation parameter in the y-direction, n represents the number of set key points, I_i,x represents the x coordinate of the i-th set key point in the first infrared image, I_i,y represents the y coordinate of the i-th set key point in the first infrared image, V_i,x represents the x coordinate of the i-th set key point in the first visible light image, V_i,y represents the y coordinate of the i-th set key point in the first visible light image, and S represents the scaling parameter.
As an example, fig. 8 shows the second visible light image obtained by scaling the first visible light image, together with the first infrared image. The size of the first object in the second visible light image is the same as the size of the first object in the first infrared image. Further, the second visible light image is translated by the translation parameters to obtain the second overlapping area overlapping the first infrared image. For example, the coordinates of the set key points in the first infrared image and the coordinates of the set key points in the second visible light image are shown in fig. 9, and the second visible light image may be translated according to the translation parameters to obtain the second overlapping area overlapping the first infrared image, as shown in fig. 10.
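Putting the scaling and translation parameters together, the sketch below crops the second overlapping area out of a visible light frame; the use of cv2.resize with bilinear interpolation and the assumption that the crop window lies inside the scaled image are illustrative choices:

```python
import cv2

def second_overlap_region(visible_bgr, ir_shape, S, dx, dy):
    """Scale the visible light image by S and crop the window that lines up
    with the infrared image of size ir_shape. Follows the coordinate
    convention of the formulas above (x indexes rows, y indexes columns),
    with (dx, dy) = (I_1,x - V'_1,x, I_1,y - V'_1,y)."""
    ir_h, ir_w = ir_shape[:2]
    scaled = cv2.resize(visible_bgr, None, fx=S, fy=S,
                        interpolation=cv2.INTER_LINEAR)
    # The infrared frame sits at row -dx, column -dy inside the scaled visible frame
    r0, c0 = int(round(-dx)), int(round(-dy))
    return scaled[r0:r0 + ir_h, c0:c0 + ir_w]
```

The returned crop has the same size as the infrared image, so the first and second overlapping areas can then be compared pixel for pixel.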
In some embodiments, in step 303, when the image parameter is a gradient, determining whether the setting scene is a foggy scene may specifically be implemented in the following two ways:
in a possible implementation manner, when the first gradient and the second gradient are obtained by calculating a gradient of each pixel in the first overlapping region and the second overlapping region respectively through a laplacian operator, the gradient of each pixel in the first overlapping region may be accumulated to obtain a first gradient of the first overlapping region, and the gradient of each pixel in the second overlapping region may be accumulated to obtain a second gradient of the second overlapping region. The first gradient and the second gradient satisfy a condition shown by the following formula:
D_ir1(f) = Σ_y Σ_x |G_ir1(x,y)|;
D_visible1(f) = Σ_y Σ_x |G_visible1(x,y)|;
wherein D_ir1(f) represents the first gradient, D_visible1(f) represents the second gradient, G_ir1(x,y) represents the gradient corresponding to the pixel with index (x,y) in the first overlapping region, and G_visible1(x,y) represents the gradient corresponding to the pixel with index (x,y) in the second overlapping region.
Further, when D_ir1(f) > D_visible1(f), the set scene is determined to be a foggy scene. When D_ir1(f) ≤ D_visible1(f), the set scene is determined not to be a foggy scene.
In some embodiments, the gradient of each pixel in the first overlap region may be weighted to determine a first gradient of the first overlap region, and the gradient of each pixel in the second overlap region may be weighted to determine a second gradient of the second overlap region.
In another possible implementation, the first overlapping area may be divided into M sub-image blocks, as shown in fig. 11. The gradient of each pixel in each of the M sub-image blocks is calculated by the Laplacian operator to obtain the first gradient. Specifically, the sum of the gradients of the pixels in each of the M sub-image blocks included in the first overlapping area may be taken as the gradient of that sub-image block; the gradient of each of the M sub-image blocks is shown in fig. 12. Further, the gradients of the sub-image blocks in the first overlapping area may be accumulated to obtain the first gradient. Similarly, the second overlapping area may be divided into N sub-image blocks, and the gradient of each pixel in each of the N sub-image blocks is calculated by the Laplacian operator to obtain the second gradient. Specifically, the sum of the gradients of the pixels in each of the N sub-image blocks included in the second overlapping area may be taken as the gradient of that sub-image block. Further, the gradients of the sub-image blocks in the second overlapping area may be accumulated to obtain the second gradient. As an example, when M = N, the first overlapping area and the second overlapping area include the same number of sub-image blocks, and the first gradient and the second gradient satisfy the conditions shown in the following formulas:
D_ir1(f) = Σ_{i=1}^{N} D_iri(f);
D_visible1(f) = Σ_{i=1}^{N} D_visiblei(f);
where D_ir1(f) denotes the first gradient, D_visible1(f) denotes the second gradient, D_iri(f) represents the gradient of the ith sub-image block in the first overlapping area, and D_visiblei(f) represents the gradient of the ith sub-image block in the second overlapping area.
In other embodiments, when the first gradient and the second gradient are obtained by calculating, with the Laplacian operator, the gradient of each pixel in the M sub-image blocks included in the first overlapping area and in the N sub-image blocks included in the second overlapping area, respectively, it is also possible to weight the per-block gradient sums of the M sub-image blocks in the first overlapping area to obtain the first gradient, and to weight the per-block gradient sums of the N sub-image blocks in the second overlapping area to obtain the second gradient. As an example, when M = N, the first overlapping area and the second overlapping area include the same number of sub-image blocks, and the first gradient and the second gradient satisfy the conditions shown in the following formulas:
D_ir1(f) = Σ_{i=1}^{N} ω_i · D_iri(f);
D_visible1(f) = Σ_{i=1}^{N} ω_i · D_visiblei(f);
where D_ir1(f) denotes the first gradient, D_visible1(f) denotes the second gradient, D_iri(f) represents the gradient of the ith sub-image block in the first overlapping area, D_visiblei(f) represents the gradient of the ith sub-image block in the second overlapping area, and ω_i represents the weight of the ith sub-image block.
In some embodiments, ω_i is related to the position of the sub-image block in the first overlapping area or the second overlapping area. In some scenarios, ω_i = height - x, where height represents the number of rows of sub-image blocks and x represents the row index of the ith sub-image block. Because the upper half of the image exhibits a larger difference in a foggy day scene, this scheme gives the upper half a larger weight. As an example, as shown in fig. 13, the Gth sub-image block is located above the Kth sub-image block, where G and K are both positive integers smaller than N, and the weight ω_G corresponding to the Gth sub-image block is larger than the weight ω_K corresponding to the Kth sub-image block.
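As a non-limiting illustration, the block-wise weighted gradient with row-dependent weights can be sketched as follows; the block size of 32 pixels and the function name are illustrative assumptions not specified above.

import cv2
import numpy as np

def blockwise_weighted_gradient(gray_region, block=32):
    """Divide the overlap region into sub-image blocks, take the sum of the
    per-pixel Laplacian magnitudes as each block's gradient, and weight the
    blocks so that upper rows contribute more (weight = rows - row_index)."""
    lap = np.abs(cv2.Laplacian(gray_region.astype(np.float64), cv2.CV_64F))
    h, w = lap.shape
    rows, cols = h // block, w // block   # number of block rows / columns
    total = 0.0
    for r in range(rows):
        weight = rows - r                 # upper blocks get larger weights
        for c in range(cols):
            blk = lap[r*block:(r+1)*block, c*block:(c+1)*block]
            total += weight * blk.sum()   # weighted gradient of this sub-image block
    return total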
In some embodiments, in step 303, when the image parameter is a gray variance, determining whether the set scene is a foggy scene may specifically be implemented in the following two ways:
in a possible implementation manner, a gray mean of the first overlapping region may be determined, and a first gray variance may be determined according to the gray value of each pixel in the first overlapping region and the gray mean of the first overlapping region, and a gray mean of the second overlapping region may be determined, and a second gray variance may be determined according to the gray value of each pixel in the second overlapping region and the gray mean of the second overlapping region. As an example, the first gray variance satisfies a condition shown in the following formula:
D_ir2(f) = Σ_y Σ_x |f_ir(x, y) - μ_ir|²;
where D_ir2(f) represents the first gray variance of the first overlapping area, f_ir(x, y) represents the gray value of the pixel with index (x, y) in the first overlapping area, and μ_ir represents the gray mean of the first overlapping area.
The second gray-scale variance of the second overlap region satisfies a condition shown by the following formula:
D_visible2(f) = Σ_y Σ_x |f_visible(x, y) - μ_visible|²;
where D_visible2(f) represents the second gray variance of the second overlapping area, f_visible(x, y) represents the gray value of the pixel with index (x, y) in the second overlapping area, and μ_visible represents the gray mean of the second overlapping area.
In some scenarios, when D_ir2(f) > D_visible2(f), the set scene is determined to be a foggy day scene.
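As a non-limiting illustration, the whole-region gray-variance comparison can be sketched as follows; the function names are illustrative. A foggy day scene is indicated when the value for the first overlapping area exceeds that for the second overlapping area.

import numpy as np

def region_gray_variance(gray_region):
    """Sum of squared deviations from the region's gray mean (D(f) above)."""
    g = gray_region.astype(np.float64)
    return ((g - g.mean()) ** 2).sum()

def is_foggy_by_variance(ir_overlap, vis_overlap):
    # first gray variance vs. second gray variance
    return region_gray_variance(ir_overlap) > region_gray_variance(vis_overlap)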
In another possible implementation, the first overlapping area may be divided into M sub image blocks, and a mean gray scale value of each of the M sub image blocks may be determined. Further, the gray variances respectively corresponding to the M sub-image blocks may be determined according to the gray value of each pixel in each sub-image block and the gray mean of each sub-image block, and the sum of the gray variances respectively corresponding to the M sub-image blocks may be used as the first gray variance. And dividing the second overlapping area into N sub image blocks, and determining the gray average value of each of the N sub image blocks. Further, the gray variances respectively corresponding to the N sub-image blocks may be determined according to the gray value of each pixel in each sub-image block and the gray mean of each sub-image block, and the sum of the gray variances respectively corresponding to the N sub-image blocks may be used as the second gray variance. As an example, when M = N, the number of sub image blocks included in the first overlapping area and the second overlapping area is the same, the first gray-scale variance of the first overlapping area and the second gray-scale variance of the second overlapping area satisfy the condition shown in the following formula:
D_ir2(f) = Σ_{i=1}^{N} D'_iri(f);
D_visible2(f) = Σ_{i=1}^{N} D'_visiblei(f);
where D_ir2(f) represents the first gray variance of the first overlapping area, D'_iri(f) represents the gray variance corresponding to the ith sub-image block in the first overlapping area, D_visible2(f) represents the second gray variance of the second overlapping area, and D'_visiblei(f) represents the gray variance corresponding to the ith sub-image block in the second overlapping area.
In some scenarios, the gray variances corresponding to the M sub-image blocks included in the first overlapping area may be weighted to obtain a first gray variance. And weighting the gray variances corresponding to the N sub-image blocks in the second overlapping area to obtain a second gray variance. As an example, when M = N, the number of sub image blocks included in the first overlapping area and the second overlapping area is the same, the first gray-scale variance of the first overlapping area and the second gray-scale variance of the second overlapping area satisfy a condition shown in the following formula:
D_ir2(f) = Σ_{i=1}^{N} ω_i · D'_iri(f);
D_visible2(f) = Σ_{i=1}^{N} ω_i · D'_visiblei(f);
where ω_i represents the weight corresponding to the ith sub-image block.
In some embodiments, the sub-image blocks may be sized according to different sized objects in the image. For example, when the object in the image is large, the size of the sub-image block may be 64 × 64. When the object in the image is small, the size of the sub-image block may be 16 × 16.
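As a non-limiting illustration, the block-wise (optionally weighted) gray variance can be sketched as follows; the default block size of 64 and the function name are illustrative, and the weights argument, if supplied, is assumed to list one weight per sub-image block in row-major order.

import numpy as np

def blockwise_weighted_variance(gray_region, block=64, weights=None):
    """Accumulate the per-block gray variance over the region, with optional weights.
    block: sub-image block size, e.g. 64 for large objects or 16 for small ones."""
    g = gray_region.astype(np.float64)
    h, w = g.shape
    rows, cols = h // block, w // block
    total = 0.0
    idx = 0
    for r in range(rows):
        for c in range(cols):
            blk = g[r*block:(r+1)*block, c*block:(c+1)*block]
            var_blk = ((blk - blk.mean()) ** 2).sum()   # variance term of the idx-th block
            omega = 1.0 if weights is None else weights[idx]
            total += omega * var_blk
            idx += 1
    return total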
In some embodiments, in step 303, the defogging process on the visible light image according to the atmospheric scattering model may include the following steps, as shown in fig. 14, specifically as follows:
1401, determining the minimum value of the three color channels of each pixel in the visible light image to obtain a minimum value image, and determining the maximum value of the three color channels of each pixel in the visible light image to obtain a maximum value image.
In some embodiments, the minimum value and the maximum value of the three color channels of each pixel in the visible light image may be determined separately. The minimum values of all pixels form the minimum value image, which may be denoted as M(x) = min_{c∈{r,g,b}} I_c(x), and the maximum values of all pixels form the maximum value image, which may be denoted as N(x) = max_{c∈{r,g,b}} I_c(x). The minimum value image may be used to characterize non-sky regions, and the maximum value image may be used to characterize sky regions.
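As a non-limiting illustration, the per-pixel channel extrema can be computed as follows; the function name is illustrative.

import numpy as np

def channel_extrema(bgr_image):
    """Per-pixel minimum and maximum over the three color channels:
    M(x) = min_c I_c(x) (non-sky cue), N(x) = max_c I_c(x) (sky cue)."""
    img = bgr_image.astype(np.float64)
    m = img.min(axis=2)   # minimum value image M(x)
    n = img.max(axis=2)   # maximum value image N(x)
    return m, n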
And 1402, determining the global atmosphere optical parameter according to the minimum value image and the maximum value image.
In some embodiments, the minimum value image may be filtered to obtain a filtered image. As an example, mean filtering may be performed on the minimum value image M(x), and the resulting filtered image may be represented as M_b = BoxFilter(M(x)). The mean filtering blurs the minimum value image, thereby removing sharp edge information.
Further, a first maximum value in the filtered image and a second maximum value in the maximum value image may be determined, where the first maximum value in the filtered image may be denoted as M_b^max and the second maximum value in the maximum value image may be denoted as N^max. The global atmospheric light parameter is the average of the first maximum value in the filtered image obtained by filtering the minimum value image and the second maximum value in the maximum value image. As an example, the global atmospheric light parameter may be represented as A = (M_b^max + N^max) / 2.
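As a non-limiting illustration, the global atmospheric light parameter can be computed as follows from the minimum value image m and maximum value image n of the previous sketch; the box-filter window of 15 pixels is an illustrative assumption not specified in the text above.

import cv2

def global_atmospheric_light(m, n, radius=15):
    """A = (max of box-filtered M + max of N) / 2."""
    m_b = cv2.boxFilter(m, cv2.CV_64F, (radius, radius))  # mean-filtered minimum value image M_b
    a = (m_b.max() + n.max()) / 2.0
    return a, m_b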
And 1403, determining an ambient light parameter according to the configured defogging adjustment parameter and the minimum value image.
In some embodiments, after filtering the minimum value image, the mean of the pixels in the filtered image may be determined. As an example, the mean value M_b^mean of all pixels in the filtered image M_b may be obtained, where M_b^mean characterizes the overall gray level and brightness of the foggy image. Further, the brightness parameter may be determined according to the pixel mean and the configured defogging adjustment parameter. For example, the defogging adjustment parameter may be denoted by ρ: the larger ρ is, the darker the defogged image is and the more obvious the defogging effect is. The brightness parameter satisfies the condition shown in the following formula: α = min(ρ · M_b^mean, Q), where the brightness parameter α characterizes the brightness of the defogged image and Q is less than 1 and greater than 0.5.
In some embodiments, the ambient light parameter may be determined by the brightness parameter and the minimum value image, and the ambient light parameter satisfies a condition shown in the following formula:
H(x) = min(α · M_b(x), M(x));
where H(x) represents the ambient light parameter corresponding to pixel x, α represents the brightness parameter, M_b(x) represents the pixel value of pixel x in the filtered image, and M(x) represents the pixel value of pixel x in the minimum value image.
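As a non-limiting illustration, the brightness parameter and the ambient light parameter can be computed as follows; the default value of Q is an illustrative choice within the stated range (0.5, 1), and the argument names follow the previous sketches.

import numpy as np

def ambient_light(m, m_b, rho, q=0.9):
    """alpha = min(rho * mean(M_b), Q); H(x) = min(alpha * M_b(x), M(x)).
    rho is the configured defogging adjustment parameter."""
    alpha = min(rho * m_b.mean(), q)     # brightness parameter
    h = np.minimum(alpha * m_b, m)       # per-pixel ambient light parameter H(x)
    return h, alpha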
And 1404, performing defogging treatment on the visible light image based on the atmospheric scattering model according to the global atmospheric optical parameter and the ambient optical parameter to obtain a defogged image.
In some embodiments, the atmospheric scattering model may be expressed as: i (x) = J (x) t (x) + a (1-t (x)). Wherein, I (x) represents a foggy image, J (x) represents an image after defogging, A is a global atmospheric optical parameter, and t (x) is a transmissivity. The above formula can be modified as:
J(x) = (I(x) - A) / t(x) + A;
further, the transmittance satisfies a condition shown by the following formula:
t(x)=1-H(x)/A;
where t (x) represents the transmittance, H (x) represents the ambient light parameter, and a represents the global atmospheric light parameter.
The defogged image satisfies the condition shown in the following formula:
J(x) = (I(x) - H(x)) / (1 - H(x) / A);
where H(x) represents the ambient light parameter, H(x) = min(α · M_b(x), M(x)).
In some embodiments, the ambient light parameter may also be determined by H(x) = M(x) · α, in which case the defogged image may be determined by the following formula:
J(x) = (I(x) - α · M(x)) / (1 - α · M(x) / A);
where α represents the configured defogging adjustment parameter.
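As a non-limiting illustration, the full recovery step can be sketched as follows, applied channel by channel to the visible light image; the lower bound t_min on the transmittance is an illustrative safeguard against division by very small values and is not part of the description above.

import numpy as np

def defog(bgr_image, h, a, t_min=0.1):
    """Recover J(x) = (I(x) - H(x)) / (1 - H(x)/A), i.e. the defogged image,
    given the per-pixel ambient light H and the global atmospheric light A."""
    img = bgr_image.astype(np.float64)
    t = np.clip(1.0 - h / a, t_min, 1.0)        # transmittance t(x) = 1 - H(x)/A
    j = (img - h[..., None]) / t[..., None]     # broadcast over the three channels
    return np.clip(j, 0, 255).astype(np.uint8)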
In some embodiments, the defogged image may be displayed on an image capture interface. In some scenarios, when it is determined that the current scenario is a foggy day scenario, a defogging adjustment parameter control may be further displayed on the display interface, so that the user may adjust the defogging degree by triggering the control, as shown in fig. 15.
Based on the same technical concept, the embodiment of the present application provides an image capturing device 1800 for a foggy day scene, which is shown in fig. 16. The apparatus 1800 may perform the steps of the image capturing method for foggy weather scenes, and will not be described in detail herein to avoid repetition. The apparatus 1800 includes an acquisition module 1801, a processing module 1802, and a display module 1803.
An obtaining module 1801, configured to respond to an image acquisition operation of a user of the terminal device, and respectively obtain an infrared image and a visible light image obtained by synchronously acquiring a set scene by the infrared camera and the visible light camera;
acquiring image parameters of a first overlapping area overlapping with the visible light image in the infrared image, and acquiring image parameters of a second overlapping area overlapping with the infrared image in the visible light image; the image parameters include gradient and/or gray variance;
a processing module 1802, configured to perform defogging processing on the visible light image according to an atmospheric scattering model when it is determined that the set scene is a foggy scene according to a comparison result between the image parameter of the first overlapping area and the image parameter of the second overlapping area, so as to obtain a defogged image;
a display module 1803, configured to display the defogged image.
In some embodiments, the processing module 1802, when determining that the set scene is a foggy scene according to the comparison result between the image parameter of the first overlapping area and the image parameter of the second overlapping area, is specifically configured to:
the image parameter is a gradient, and when a first gradient of the first overlapping area is larger than a second gradient of the second overlapping area, the set scene is determined to be a foggy day scene; or,
the image parameter is a gray variance, and when a first gray variance of the first overlapping area is larger than a second gray variance of the second overlapping area, the set scene is determined to be a foggy day scene; or,
the image parameters include a gradient and a gray variance, and when the first gradient of the first overlapping area is greater than the second gradient of the second overlapping area, and the first gray variance of the first overlapping area is greater than the second gray variance of the second overlapping area, the set scene is determined to be a foggy day scene.
In some embodiments, the obtaining module 1801 is further configured to:
obtaining the first gradient and the second gradient by:
calculating a gradient of each pixel in the first overlapping region by a laplacian operator to obtain the first gradient, and calculating a gradient of each pixel in the second overlapping region by a laplacian operator to obtain the second gradient; or,
dividing the first overlapping area into M sub-image blocks, calculating a gradient of each pixel in each of the M sub-image blocks through a Laplacian operator to obtain the first gradient, dividing the second overlapping area into N sub-image blocks, and calculating a gradient of each pixel in each of the N sub-image blocks through a Laplacian operator to obtain the second gradient.
In some embodiments, the obtaining module 1801, when calculating the gradient of each pixel in each sub image block of the M sub image blocks through the laplacian operator to obtain the first gradient, is specifically configured to:
calculating a gradient of each pixel in each sub image block included in the first overlapping area by using a laplacian operator;
taking the sum of the gradients of each pixel in each sub-image block as the gradient of each sub-image block;
weighting the gradient of each sub image block to obtain the first gradient;
the obtaining module 1801 calculates a gradient of each pixel in each sub image block of the N sub image blocks through a laplacian operator, to obtain the second gradient, and is specifically configured to:
calculating the gradient of each pixel in each sub image block included in the second overlapped area through a Laplace operator;
taking the sum of the gradients of each pixel in each sub image block as the gradient of each sub image block;
and weighting the gradient of each sub image block to obtain the second gradient.
In some embodiments, the obtaining module 1801 is further configured to determine the first gray variance and the second gray variance by: determining a gray mean value of the first overlapping area, and determining the first gray variance according to the gray value of each pixel in the first overlapping area and the gray mean value of the first overlapping area; determining a gray mean value of the second overlapping area, and determining the second gray variance according to the gray value of each pixel in the second overlapping area and the gray mean value of the second overlapping area; or dividing the first overlapping area into M sub-image blocks, determining a gray mean value of each sub-image block in the M sub-image blocks, determining gray variances respectively corresponding to the M sub-image blocks according to a gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the M sub-image blocks to obtain the first gray variance; dividing the second overlapping area into N sub-image blocks, determining a gray mean value of each sub-image block in the N sub-image blocks, determining gray variances respectively corresponding to the N sub-image blocks according to the gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the N sub-image blocks to obtain the second gray variance.
In some embodiments, when the processing module 1802 performs defogging processing on the visible light image according to the atmospheric scattering model to obtain a defogged image, the processing module is specifically configured to: determining the minimum value of the three color channels of each pixel in the visible light image to obtain a minimum value image, and determining the maximum value of the three color channels of each pixel in the visible light image to obtain a maximum value image; determining a global atmospheric optical parameter according to the minimum value image and the maximum value image; the global atmospheric optical parameter is an average value of a first maximum value in a filtered image obtained after the minimum value image is filtered and a second maximum value in the maximum value image; determining an ambient light parameter according to the configured defogging adjustment parameter and the minimum value image; and defogging the visible light image based on the atmospheric scattering model according to the global atmospheric optical parameter and the environmental optical parameter to obtain a defogged image.
In some embodiments, the processing module 1802, when determining the ambient light parameter from the defogging adjustment parameter and the minimum value image, is specifically configured to:
determining an ambient light parameter by the following formula:
H(x) = min(α · M_b(x), M(x));
where H(x) represents the ambient light parameter corresponding to the pixel x, α represents the brightness parameter, α = min(ρ · M_b^mean, Q), ρ represents the defogging adjustment parameter, M_b^mean represents the mean value of all pixels of the filtered image obtained after filtering the minimum value image, M_b(x) represents the pixel value of the pixel x in the filtered image obtained after filtering the minimum value image, M(x) represents the pixel value of the pixel x in the minimum value image, and Q is less than 1 and greater than 0.5.
Based on the same technical concept, embodiments of the present application provide a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to perform any one of the steps of the image acquisition method for the foggy day scene.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. The image acquisition method for the foggy day scene is characterized by being applied to terminal equipment, wherein the terminal equipment at least comprises an infrared camera and a visible light camera, and the image acquisition method comprises the following steps:
responding to the image acquisition operation of a user of the terminal equipment, and respectively acquiring an infrared image and a visible light image which are obtained by synchronously acquiring a set scene by the infrared camera and the visible light camera;
acquiring image parameters of a first overlapping area overlapping with the visible light image in the infrared image, and acquiring image parameters of a second overlapping area overlapping with the infrared image in the visible light image; the image parameters include gradients and/or gray-scale variances;
when the set scene is determined to be a foggy scene according to the comparison result of the image parameters of the first overlapping area and the second overlapping area, carrying out defogging treatment on the visible light image according to an atmospheric scattering model to obtain a defogged image;
and displaying the defogged image.
2. The method of claim 1, wherein determining that the set scene is a foggy day scene based on the comparison of the image parameters of the first overlap region and the second overlap region comprises:
the image parameter is a gradient, and when a first gradient of the first overlapping area is larger than a second gradient of the second overlapping area, the set scene is determined to be a foggy day scene; or,
the image parameter is a gray variance, and when a first gray variance of the first overlapping area is larger than a second gray variance of the second overlapping area, the set scene is determined to be a foggy day scene; or,
the image parameters include a gradient and a gray variance, and when the first gradient of the first overlapping area is greater than the second gradient of the second overlapping area and the first gray variance of the first overlapping area is greater than the second gray variance of the second overlapping area, it is determined that the set scene is a foggy day scene.
3. The method of claim 2, wherein the method further comprises:
obtaining the first gradient and the second gradient by:
calculating a gradient of each pixel in the first overlapping region by a laplacian operator to obtain the first gradient, and calculating a gradient of each pixel in the second overlapping region by a laplacian operator to obtain the second gradient; or,
dividing the first overlapping area into M sub-image blocks, calculating a gradient of each pixel in each of the M sub-image blocks by using a laplacian operator to obtain the first gradient, dividing the second overlapping area into N sub-image blocks, and calculating a gradient of each pixel in each of the N sub-image blocks by using a laplacian operator to obtain the second gradient.
4. The method of claim 3, wherein said calculating a gradient for each pixel in each of the M sub-image blocks by the Laplacian operator to obtain the first gradient comprises:
calculating a gradient of each pixel in each sub-image block included in the first overlapping area by using a laplacian operator;
taking the sum of the gradients of each pixel in each sub image block as the gradient of each sub image block;
weighting the gradient of each subimage block to obtain the first gradient;
the calculating, by the laplacian operator, the gradient of each pixel in each of the N sub-image blocks to obtain the second gradient includes:
calculating the gradient of each pixel in each sub image block included in the second overlapped area through a Laplace operator;
taking the sum of the gradients of each pixel in each sub image block as the gradient of each sub image block;
and weighting the gradient of each sub image block to obtain the second gradient.
5. The method of claim 2, wherein the method further comprises:
determining the first and second gray variances by:
determining a gray mean value of the first overlapping area, and determining the first gray variance according to the gray value of each pixel in the first overlapping area and the gray mean value of the first overlapping area;
determining a gray mean value of the second overlapping area, and determining the second gray variance according to the gray value of each pixel in the second overlapping area and the gray mean value of the second overlapping area;
or,
dividing the first overlapping area into M sub-image blocks, determining a gray mean value of each sub-image block in the M sub-image blocks, determining gray variances respectively corresponding to the M sub-image blocks according to a gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the M sub-image blocks to obtain a first gray variance;
dividing the second overlapping area into N sub-image blocks, determining a gray mean value of each sub-image block in the N sub-image blocks, determining gray variances respectively corresponding to the N sub-image blocks according to the gray value of each pixel in each sub-image block and the gray mean value of each sub-image block, and weighting the gray variances respectively corresponding to the N sub-image blocks to obtain the second gray variance.
6. The method of any one of claims 1-5, wherein performing defogging processing on the visible light image according to the atmospheric scattering model to obtain the defogged image comprises:
determining the minimum value of the three color channels of each pixel in the visible light image to obtain a minimum value image, and determining the maximum value of the three color channels of each pixel in the visible light image to obtain a maximum value image;
determining a global atmospheric optical parameter according to the minimum value image and the maximum value image; the global atmospheric optical parameter is an average value of a first maximum value in a filtered image obtained after the minimum value image is filtered and a second maximum value in the maximum value image;
determining an ambient light parameter according to the configured defogging adjustment parameter and the minimum value image;
and carrying out defogging treatment on the visible light image based on the atmospheric scattering model according to the global atmospheric optical parameter and the environmental optical parameter to obtain a defogged image.
7. The method of claim 6, wherein said determining an ambient light parameter from said defogging adjustment parameter and said minimum value image comprises:
determining an ambient light parameter by the following formula:
H(x) = min(α · M_b(x), M(x));
where H(x) represents the ambient light parameter corresponding to the pixel x, α represents the brightness parameter, α = min(ρ · M_b^mean, Q), ρ represents the defogging adjustment parameter, M_b^mean represents the mean value of all pixels of the filtered image obtained after filtering the minimum value image, M_b(x) represents the pixel value of the pixel x in the filtered image obtained after filtering the minimum value image, M(x) represents the pixel value of the pixel x in the minimum value image, and Q is less than 1 and greater than 0.5.
8. An image acquisition device for foggy day scenes, comprising:
the acquisition module is used for responding to the image acquisition operation of a user of the terminal equipment and respectively acquiring an infrared image and a visible light image which are obtained by synchronously acquiring a set scene by the infrared camera and the visible light camera;
acquiring image parameters of a first overlapping area in the infrared image, which is overlapped with the visible light image, and acquiring image parameters of a second overlapping area in the visible light image, which is overlapped with the infrared image; the image parameters include gradients and/or gray-scale variances;
the processing module is used for performing defogging processing on the visible light image according to an atmospheric scattering model to obtain a defogged image when the set scene is determined to be a foggy scene according to the comparison result of the image parameters of the first overlapping area and the image parameters of the second overlapping area;
and the display module is used for displaying the defogged image.
9. A terminal device, comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory and for executing the steps comprised by the method of any one of claims 1 to 7 in accordance with the obtained program instructions.
10. A computer-readable storage medium having stored therein instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-7.
