CN107277299B - Image processing method, image processing device, mobile terminal and computer readable storage medium - Google Patents

Image processing method, image processing device, mobile terminal and computer readable storage medium

Info

Publication number
CN107277299B
CN107277299B (application CN201710626257.1A)
Authority
CN
China
Prior art keywords
image
defogging
depth
field information
visibility
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710626257.1A
Other languages
Chinese (zh)
Other versions
CN107277299A (en)
Inventor
袁全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710626257.1A
Publication of CN107277299A
Application granted
Publication of CN107277299B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Abstract

The embodiment of the application relates to an image processing method, an image processing device, a mobile terminal and a computer readable storage medium. The method comprises the following steps: acquiring an image through a first camera, and acquiring first depth-of-field information of the image through a second camera; determining the fog concentration of the image according to the first depth-of-field information; calculating visibility according to the fog concentration; and when the visibility is smaller than a preset threshold value, calculating a defogging parameter according to the fog concentration, and defogging the image according to the defogging parameter. The image processing method, the image processing device, the mobile terminal and the computer readable storage medium can effectively remove fog in the collected image, so that the driving video recorded in foggy weather is clearer.

Description

Image processing method, image processing device, mobile terminal and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a mobile terminal, and a computer-readable storage medium.
Background
During driving, a driver usually records video and sound of the whole journey through a mobile terminal, such as a smart phone or a car recorder. In foggy weather, the imaging device on the mobile terminal is affected by suspended particles in the air: features of the collected images such as color and texture are severely weakened, the overall tone of the collected images tends to gray, and the driving video recorded by the mobile terminal becomes unclear.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, a mobile terminal and a computer readable storage medium, which can effectively remove fog in a collected image and make the driving video recorded in foggy weather clearer.
An image processing method comprising:
acquiring an image through a first camera, and acquiring first depth-of-field information of the image through a second camera;
determining the fog concentration of the image according to the first depth of field information;
calculating visibility according to the fog concentration;
and when the visibility is smaller than a preset threshold value, calculating a defogging parameter according to the fog concentration, and defogging the image according to the defogging parameter.
In one embodiment, the determining the fog concentration of the image according to the first depth of field information includes:
obtaining a dark channel image of the image;
performing area division on the dark channel image according to the first depth of field information;
and performing smooth filtering processing on each divided region, and determining the fog concentration of the image according to the filtered dark channel image.
In one embodiment, the defogging processing on the image according to the defogging parameters includes:
detecting whether the image contains a moving object;
and if so, extracting a moving object region in the image, and performing defogging processing on the moving object region.
In one embodiment, the defogging processing on the moving object region includes:
acquiring second depth-of-field information of the moving object region through the second camera;
selecting a correction factor matched with the second depth of field information;
and adjusting the defogging parameters according to the correction factor, and performing defogging treatment on the moving object region according to the adjusted defogging parameters.
In one embodiment, the method further comprises:
determining a visibility range interval in which the visibility falls;
acquiring a line-of-sight distance matched with the visibility range interval;
and acquiring the current moving speed, and if the moving speed is greater than a preset standard speed corresponding to the line-of-sight distance, giving out a warning.
An image processing apparatus comprising:
the acquisition module is used for acquiring an image through a first camera and acquiring first depth-of-field information of the image through a second camera;
the first determining module is used for determining the fog concentration of the image according to the first depth-of-field information;
the computing module is used for computing the visibility according to the fog concentration;
and the defogging module is used for calculating defogging parameters according to the fog concentration and defogging the image according to the defogging parameters when the visibility is smaller than a preset threshold value.
In one embodiment, the first determining module includes:
the obtaining unit is used for obtaining a dark channel image of the image;
the dividing unit is used for carrying out region division on the dark channel image according to the first depth of field information;
and the filtering unit is used for performing smooth filtering processing on each divided region and determining the fog concentration of the image according to the filtered dark channel image.
In one embodiment, the defogging module includes:
the detection unit is used for detecting whether the image contains a moving object;
the defogging unit is used for extracting a moving object region in the image and defogging the moving object region if the image contains a moving object;
the defogging unit comprises:
the acquisition subunit is used for acquiring second depth-of-field information of the moving object region through the second camera;
a selecting subunit, configured to select a correction factor matched with the second depth-of-field information;
and the adjusting subunit is used for adjusting the defogging parameters according to the correction factors and performing defogging processing on the moving object region according to the adjusted defogging parameters.
In one embodiment, the apparatus further comprises:
the second determining module is used for determining a visibility range interval in which the visibility falls;
the distance acquisition module is used for acquiring the line-of-sight distance matched with the visibility range interval;
and the warning module is used for acquiring the current moving speed, and if the moving speed is greater than a preset standard speed corresponding to the line-of-sight distance, warning is sent.
A mobile terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method as described above when executing the program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method as set forth above.
According to the image processing method, the image processing device, the mobile terminal and the computer readable storage medium, the first camera collects the image and the second camera obtains the first depth-of-field information of the image. The fog concentration of the image is determined according to the first depth-of-field information, and the visibility is calculated from it. When the visibility is smaller than a preset threshold value, the defogging parameter is calculated according to the fog concentration and defogging is performed. The collected image can thus be defogged to different degrees according to different fog concentrations, the fog in the collected image can be effectively removed, and the driving video recorded in foggy weather is clearer.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of an application of an image processing method;
FIG. 2 is a block diagram of a mobile terminal in one embodiment;
FIG. 3 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 4 is a schematic flow chart illustrating the process of determining the fog concentration of an image in one embodiment;
FIG. 5 is a flow diagram illustrating an embodiment of alerting for a current moving speed;
FIG. 6 is a block diagram of an image processing apparatus in one embodiment;
FIG. 7 is a block diagram of a first determination module in one embodiment;
FIG. 8 is a block diagram of an image processing apparatus in another embodiment;
FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a diagram illustrating an application scenario of an image processing method according to an embodiment. As shown in fig. 1, the mobile terminal 10 may capture and record, through a camera 102, a scene 20 in front of a running vehicle, where the camera 102 may include a first camera and a second camera. The mobile terminal 10 acquires an image of the scene 20 in front of the vehicle through the first camera, and acquires first depth-of-field information of the acquired image through the second camera. The mobile terminal 10 may determine the fog concentration of the collected image according to the first depth-of-field information, and calculate the visibility of the scene 20 in front of the vehicle according to the fog concentration. When the visibility of the scene 20 in front of the vehicle is smaller than a preset threshold value, the mobile terminal 10 calculates the defogging parameters according to the fog concentration, and performs defogging processing on the image according to the defogging parameters.
Fig. 2 is a block diagram of the mobile terminal 10 in one embodiment. As shown in fig. 2, the mobile terminal 10 includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input device, which are connected via a system bus. The non-volatile storage medium of the mobile terminal 10 stores an operating system and computer-executable instructions, which are executed by the processor to implement the image processing method provided in the embodiments of the present application. The processor provides computing and control capabilities that support the overall operation of the mobile terminal 10. The internal memory within the mobile terminal 10 provides an environment for executing the computer-readable instructions in the non-volatile storage medium. The network interface is used for network communication with a server. The display screen of the mobile terminal 10 may be a liquid crystal display screen or an electronic ink display screen, and the input device may be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the housing of the mobile terminal 10, or an external keyboard, touch pad or mouse. The mobile terminal 10 may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc. Those skilled in the art will appreciate that the configuration shown in fig. 2 is a block diagram of only a portion of the configuration associated with the present application and does not limit the mobile terminal 10 to which the present application applies; a particular mobile terminal 10 may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
As shown in fig. 3, in one embodiment, there is provided an image processing method including the steps of:
step 310, acquiring an image through a first camera, and acquiring first depth-of-field information of the image through a second camera.
The mobile terminal may be provided with two rear cameras, a first camera and a second camera, which may be arranged side by side on the same horizontal line or one above the other on the same vertical line. In this embodiment, the first camera and the second camera may have different pixel counts: the first camera may be a higher-pixel camera used mainly for imaging, and the second camera may be a lower-pixel auxiliary depth-of-field camera used for acquiring depth-of-field information of the collected image. The mobile terminal can continuously collect multiple frames of images of the scene in front of the vehicle through the first camera to generate a driving-record video, while the second camera acquires the first depth-of-field information of each collected frame. The first depth-of-field information can be understood as the distance from each object in the image to the mobile terminal, that is, object distance information. The mobile terminal can acquire the first depth-of-field information of each pixel point in the collected image through the second camera; further, the average depth-of-field information of all pixel points in the image can be calculated and used as the first depth-of-field information of the image.
And step 320, determining the fog concentration of the image according to the first depth of field information.
The mobile terminal can estimate the fog concentration of the image according to the first depth of field information of each pixel point of the collected image, and the depth of field information and the fog concentration can satisfy the relation shown in the formula (1):
F(x) = 1 - e^(-β·d(x))    (1);
wherein F(x) represents the fog concentration, x represents the spatial position of a pixel point in the image, β represents the atmospheric scattering coefficient, and d(x) represents the depth of field. The fog concentration F(x) and the depth of field d(x) are in an exponential relation, and F(x) grows with increasing d(x). The mobile terminal can detect whether the image has pixel points with an abrupt change of depth-of-field information, so as to judge whether the fog concentration of the image is uniformly distributed. The difference between the first depth-of-field information of adjacent pixel points can be calculated; if the difference is larger than a first numerical value, an abrupt change of depth-of-field information is judged to exist between those two pixel points, indicating that the fog concentration distribution in the image is not uniform. If the fog concentration distribution of the image is uniform, the mobile terminal can directly determine the fog concentration of the image; if it is non-uniform, the image can be divided into areas according to the first depth-of-field information of each pixel point, the fog concentration of each area can be determined separately, and the average of the fog concentrations of the areas can be taken as the fog concentration of the image.
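For illustration only (not part of the patent text), a minimal Python sketch of this step; β and the first-numerical-value threshold below are assumed placeholder values:

    import numpy as np

    def fog_concentration(depth_m, beta=0.01):
        # Equation (1): F(x) = 1 - exp(-beta * d(x)); F approaches 1 as depth grows.
        return 1.0 - np.exp(-beta * depth_m)

    def depth_is_non_uniform(depth_m, first_value=5.0):
        # Abrupt depth change: any adjacent-pixel difference above the
        # "first numerical value" (5 m is a placeholder, not from the patent).
        dx = np.abs(np.diff(depth_m, axis=1))
        dy = np.abs(np.diff(depth_m, axis=0))
        return bool((dx > first_value).any() or (dy > first_value).any())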
And step 330, calculating the visibility according to the fog concentration.
Visibility, an indicator of atmospheric transparency, is generally defined as the maximum distance at which a person with normal vision can clearly see the outline of an object under the current weather conditions. Visibility is closely related to the current weather: during rainfall, fog, haze, sandstorms and similar conditions, atmospheric transparency is low, so visibility is poor. The mobile terminal can calculate the visibility of the current scene in front of the vehicle from the determined fog concentration, for which a relational expression (2) between the atmospheric scattering coefficient and visibility can be established:
β = 3.912 / L    (2);
wherein L represents the visibility. Then, according to equation (1) and equation (2), the relation between fog concentration and visibility can be established as shown in equation (3):
F(x) = 1 - e^(-3.912·d(x)/L)    (3);
the mobile terminal can substitute the first depth of field information and the fog concentration of the image into the formula (3), and the visibility of the current scene in front of the vehicle can be calculated.
And 340, when the visibility is smaller than a preset threshold value, calculating a defogging parameter according to the fog concentration, and defogging the image according to the defogging parameter.
After the mobile terminal obtains the visibility of the current scene in front of the vehicle, it can judge whether the visibility is smaller than a preset threshold value. If the visibility is not smaller than the preset threshold value, the scene in front of the vehicle can be considered fog-free, the image collected by the first camera is clear, and no defogging is needed. The preset threshold value can be set according to actual requirements, such as 1000 m (meters), 1200 m, and so on. If the visibility is smaller than the preset threshold value, the image can be defogged. Deciding whether to defog according to the visibility of the current scene in front of the vehicle reduces the amount of computation, lowers the power consumption of the mobile terminal, and relieves the pressure on the central processing unit.
The mobile terminal can defog the acquired image according to a defogging algorithm. Defogging algorithms include algorithms based on image enhancement, such as those based on Retinex theory or on histogram equalization, and algorithms based on image restoration, such as those based on an atmospheric scattering model. In this embodiment, the mobile terminal may defog the fog-containing image through the dark channel prior algorithm, which belongs to the defogging algorithms based on image restoration. The dark channel prior algorithm describes the fog-containing image with an atmospheric scattering model, which can be written as formula (4):
I(x)=J(x)t(x)+A(1-t(x)) (4);
wherein I(x) represents the fog-containing image to be defogged, J(x) represents the fog-free image obtained after defogging, t(x) represents the transmittance, and A represents the atmospheric light value. In the present embodiment, the defogging parameters may include the atmospheric light value, the transmittance, and the like of the image.
The dark channel prior states that for fog-free images, among the three RGB (red, green, blue color space) channels there will always be at least one color channel in which some pixels have very low values, close to zero. Thus, for any image, its dark channel image can be defined as shown in equation (5):
J^dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} J^c(y)    (5);
wherein, Jdark(x) Representing dark channel images, Jc(y) represents the value of the color channel and Ω (x) represents a window centered on pixel x.
The mobile terminal can obtain the dark channel image of the collected image according to equation (5) and derive the atmospheric light value of the image from it. Specifically, the mobile terminal may take the brightness of each pixel point in the dark channel image, sort the pixel points by brightness, and extract a preset proportion of the brightest pixel points, where the preset proportion can be set according to actual requirements, for example the top 0.1% or 0.2% by brightness. The brightness values at the corresponding positions in the collected image are then examined, and the brightness value of the brightest such pixel point is taken as the atmospheric light value.
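A hedged OpenCV/NumPy sketch of these two steps (the 15-pixel patch and the 0.1% ratio are example settings, not fixed by the patent):

    import cv2
    import numpy as np

    def dark_channel(img_bgr, patch=15):
        # Equation (5): per-pixel minimum over the color channels, then a
        # minimum filter (erosion) over the window Ω(x).
        min_rgb = img_bgr.min(axis=2)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        return cv2.erode(min_rgb, kernel)

    def atmospheric_light(img_bgr, dark, ratio=0.001):
        # Keep the brightest `ratio` fraction of dark-channel pixels (top 0.1%),
        # then take the highest luminance among them in the source image.
        n = max(1, int(dark.size * ratio))
        idx = np.argsort(dark.ravel())[-n:]
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).ravel()
        return float(gray[idx].max())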
The mobile terminal may calculate the transmittance of the image according to the fog concentration, and in one embodiment, the relation between the fog concentration and the transmittance may be as shown in equation (6):
F(x)=1-t(x) (6);
according to the formula (6), the transmittance of the image can be calculated, the transmittance and the atmospheric light value are substituted into the formula (4), and the acquired image is taken as I (x), so that the image after defogging treatment can be obtained. Furthermore, the mobile terminal can directly process the images of the preset number acquired subsequently according to the transmittance and the atmospheric light value obtained by the calculation, and the transmittance and the atmospheric light value do not need to be recalculated for each frame of image, so that the defogging processing speed of the video is accelerated, and the pressure of a central processing unit is reduced. For example, the mobile terminal may calculate the defogging parameters of the image every 100 frames of images, and perform defogging processing on the subsequently acquired 99 frames of images according to the defogging parameters such as the transmittance and the atmospheric light value calculated by the current image.
According to the image processing method, the image is collected through the first camera and its first depth-of-field information is obtained through the second camera. The fog concentration of the image is determined according to the first depth-of-field information and the visibility is calculated from it; when the visibility is smaller than a preset threshold value, the defogging parameter is calculated according to the fog concentration and defogging is performed. The collected image can thus be defogged to different degrees according to different fog concentrations, the fog can be effectively removed, and the driving video recorded in foggy weather is clearer.
As shown in fig. 4, in one embodiment, the step 320 of determining the fog concentration of the image according to the first depth of field information includes the following steps:
step 402, a dark channel image of the image is obtained.
The mobile terminal can obtain the dark channel image of the collected image according to equation (5) and estimate the fog concentration of the image from it: the dark channel image takes, for each pixel point in the collected image, the minimum value over the R, G and B channels within a local window, and can serve as an estimation image of the fog concentration.
And step 404, performing area division on the dark channel image according to the first depth of field information.
The mobile terminal can detect whether the image has pixel points with an abrupt change of depth-of-field information, so as to judge whether the fog concentration of the image is uniformly distributed. After the first depth-of-field information of each pixel point in the image is acquired through the second camera, the difference between the first depth-of-field information d(x1) and d(x2) of two adjacent pixel points can be calculated. If the difference is greater than a first numerical value, an abrupt change of depth-of-field information is judged to exist between the two pixel points, which indicates that a foreground and a background exist in the image, and that the fog concentrations of the foreground and background are not uniformly distributed and may jump sharply. For example, if the acquired image contains trees and tall buildings, the trees being the foreground and the buildings the background, the depth-of-field information changes greatly from the leaves and branches to the buildings, so the fog concentrations of the trees and the buildings differ greatly.
If the fog concentration distribution of the image is not uniform, the dark channel image may be divided into regions according to the first depth-of-field information of each pixel point, pixel points with similar first depth-of-field information being assigned to the same region. For example, the dark channel image may be divided into a foreground region and a background region according to the first depth-of-field information, but it is not limited thereto; it may also be divided into three or four regions, and so on, mainly determined by the first depth-of-field information of each pixel point.
And step 406, performing smooth filtering processing on each divided region, and determining the fog concentration of the image according to the filtered dark channel image.
The mobile terminal may perform smoothing filtering on each divided region of the dark channel image; in one embodiment, median filtering may be applied to each region so as to retain the boundaries in the dark channel image where the depth-of-field information changes abruptly. Median filtering is a nonlinear smoothing technique that replaces the value of any pixel point in a region with the median of all pixel values in the window containing that pixel, so that the surrounding values stay close to the true values.
The mobile terminal may define a window of preset size in the dark channel image, where the preset size may be N × N with N odd; for example, N may be 41 or 61, but is not limited thereto. Median filtering is performed region by region: the values of the pixel points inside the window are extracted, sorted by magnitude into a sequence, and the value in the middle of the sequence, i.e. the median of the pixel values contained in the window, is obtained. The mobile terminal then replaces the value of the pixel point at the center of the window with this median, completing one median filtering step. The process can be repeated for windows at different positions within the same region. The mobile terminal may estimate the fog concentration of each region from the median-filtered dark channel image, and may take the average of the regions' fog concentrations as the fog concentration of the image.
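A hedged sketch of the region-wise filtering (the 30 m foreground/background split and the kernel size are assumed example values; a stricter version would filter each region in isolation rather than masking a globally filtered image):

    import cv2
    import numpy as np

    def region_filtered_fog(dark_u8, depth_m, split_m=30.0, ksize=41):
        # Divide by depth into foreground/background, median-filter (ksize odd;
        # dark_u8 must be uint8 for ksize > 5), average the regions' fog levels.
        filtered = cv2.medianBlur(dark_u8, ksize)
        regions = [depth_m < split_m, depth_m >= split_m]
        fogs = [filtered[m].mean() / 255.0 for m in regions if m.any()]
        return float(np.mean(fogs))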
In this embodiment, the dark channel image of the acquired image may be subjected to region division according to the first depth of field information, each divided region may be subjected to smoothing filtering, and the fog concentration of the image may be determined, so that the estimated fog concentration may be more accurate, the defogging effect may be effectively improved, and the defogged image may be clearer.
In one embodiment, the step of defogging the image according to the defogging parameters comprises: and detecting whether the image contains a moving object, if so, extracting a moving object region in the image, and performing defogging processing on the moving object region.
The mobile terminal collects multiple frames of the scene in front of the vehicle through the first camera and acquires, through the second camera, the first depth-of-field information of each pixel point in each frame. For two adjacent frames, the difference between the first depth-of-field information of pixel points at corresponding positions is calculated; when the difference for some pixel points is greater than a second numerical value, the image is judged to contain a moving object. The mobile terminal can extract all pixel points whose first depth-of-field information differs from that of the previous frame by more than the second numerical value, take the extracted pixel points as the moving object region, and preferentially defog that region.
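A minimal sketch of this detection, assuming aligned per-frame depth maps (the 2 m "second numerical value" is a placeholder):

    import numpy as np

    def moving_object_mask(depth_prev, depth_curr, second_value=2.0):
        # Pixels whose depth changed between consecutive frames by more than
        # the "second numerical value" form the moving object region.
        return np.abs(depth_curr - depth_prev) > second_value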
In one embodiment, the step of defogging the moving object region includes the following steps:
(1) and acquiring second depth-of-field information of the moving object region through a second camera.
The mobile terminal can acquire the first depth of field information of each pixel point in the moving object region through the second camera, calculate the average depth of field information of all the pixel points in the moving object region, and use the average depth of field information as the second depth of field information of the moving object region.
(2) And selecting a correction factor matched with the second depth of field information.
When the collected image contains a moving object, the moving object region can be extracted and defogged more strongly, so that the moving object in the collected image stays clear. A correction factor can be introduced to adjust the calculated defogging parameters such as the transmittance, and a correspondence between correction factors and depth-of-field ranges can be established, different depth-of-field ranges corresponding to different correction factors. The correction factor can be a value greater than 0 and less than or equal to 1: the larger the depth-of-field range, the smaller the corresponding correction factor, and the smaller the depth-of-field range, the larger the corresponding correction factor. After the mobile terminal acquires the second depth-of-field information of the moving object region in the image, it can determine the depth-of-field range into which the second depth-of-field information falls and select the correction factor matched with that range.
(3) And adjusting the defogging parameters according to the correction factor, and performing defogging treatment on the moving object region according to the adjusted defogging parameters.
The mobile terminal may adjust the transmittance with a correction factor according to equation (7):
t'=W*t (7);
where W denotes the correction factor, t denotes the transmittance of the image, and t' denotes the adjusted transmittance. When the second depth-of-field information of the moving object region is larger, the moving object is farther from the mobile terminal, the selected correction factor is smaller, the adjusted transmittance is smaller, and the defogging is stronger; when the second depth-of-field information is smaller, the moving object is closer, the selected correction factor is larger, the adjusted transmittance is larger, and the defogging is weaker.
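An illustrative correction-factor lookup (the table values are assumptions; the patent only requires W in (0, 1], with smaller W for larger depth):

    # (depth upper bound in meters, correction factor W); assumed example table.
    CORRECTION_TABLE = [(10.0, 0.9), (30.0, 0.7), (100.0, 0.5), (float("inf"), 0.3)]

    def adjusted_transmittance(t, region_depth_m):
        # Equation (7): t' = W * t, W chosen by the depth range into which the
        # region's second depth-of-field information falls.
        for upper, w in CORRECTION_TABLE:
            if region_depth_m <= upper:
                return w * t
        return t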
In one embodiment, the mobile terminal may also perform defogging only on the moving object region in the image when detecting that the moving object is included in the image, and does not perform defogging on the image when the moving object is not included in the image, so that the operation of defogging processing can be reduced, the resource consumption can be saved, and the pressure of the central processing unit can be reduced.
In this embodiment, when it is detected that the image includes the moving object, the transmittance may be adjusted by selecting the correction factor according to the second depth of field information of the moving object region, so that the moving object region in the image is clearer and the defogging effect is better.
As shown in fig. 5, in an embodiment, the image processing method further includes the following steps:
step 502, determining a visibility range interval in which visibility falls.
A plurality of visibility range intervals can be divided in advance; for example, the intervals can include less than 100 m, 100-200 m, 200-500 m, 500-1000 m, 1000-2000 m, and so on.
And step 504, acquiring the line-of-sight distance matched with the visibility range interval.
Each visibility range interval can correspond to a different line-of-sight distance, i.e. the visible distance for vehicles running within that visibility interval; further, the line-of-sight distance and the visibility information can have a linear regression relationship. For example, the visibility intervals of less than 100 m, 100-200 m, 200-500 m, 500-1000 m and 1000-2000 m can correspond to line-of-sight distances of less than 20 m, 20-50 m, 50-150 m, 150-250 m and 250-520 m, respectively. After calculating the visibility of the current scene in front of the vehicle, the mobile terminal can determine the visibility range interval into which the visibility falls and acquire the corresponding line-of-sight distance.
Step 506, obtaining the current moving speed, and if the moving speed is greater than a preset standard speed corresponding to the line-of-sight distance, sending an alarm.
A corresponding standard speed can be preset for each line-of-sight distance, the standard speed being the recommended safe driving speed at that distance; when the driving speed does not exceed the standard speed, driving safety can be ensured. The smaller the line-of-sight distance, the lower the standard speed, and the larger the line-of-sight distance, the higher the standard speed. For example, when the visibility is less than 100 m, the line-of-sight distance is less than 20 m and the corresponding standard speed is 40 km/h; when the visibility is between 100 m and 200 m, the line-of-sight distance is between 20 m and 50 m and the corresponding standard speed is 60 km/h. The mobile terminal may obtain the current moving speed through a sensor such as a gyroscope and compare it with the standard speed corresponding to the obtained line-of-sight distance; if the current moving speed is greater than the standard speed, a warning may be issued to remind the user, where the warning may include one or more of sounding an alert, lighting the screen red, and popping up a prompt box, but is not limited thereto.
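A hedged sketch of the interval lookup and overspeed check (only the first two standard speeds appear in the text; the remaining rows are assumed examples):

    # (visibility cap m, line-of-sight cap m, standard speed km/h);
    # rows beyond 200 m visibility are assumed, not taken from the text.
    SPEED_TABLE = [(100, 20, 40), (200, 50, 60), (500, 150, 80),
                   (1000, 250, 100), (2000, 520, 120)]

    def overspeed(visibility_m, speed_kmh):
        # Find the interval the visibility falls in and compare against its
        # standard speed; above the last interval, no warning is issued.
        for vis_cap, _sight_cap, std_speed in SPEED_TABLE:
            if visibility_m < vis_cap:
                return speed_kmh > std_speed
        return False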
In the embodiment, the moving speed of the vehicle can be reminded according to the current visibility information, and the driving safety of a user in a foggy day is ensured.
As shown in fig. 6, in one embodiment, an image processing apparatus 600 is provided and includes an acquisition module 610, a first determination module 620, a calculation module 630, and a defogging module 640.
The acquisition module 610 is configured to acquire an image through a first camera, and acquire first depth-of-field information of the image through a second camera.
And a first determining module 620, configured to determine the fog concentration of the image according to the first depth-of-field information.
And the calculating module 630 is used for calculating the visibility according to the fog concentration.
And the defogging module 640 is used for calculating defogging parameters according to the fog concentration when the visibility is smaller than a preset threshold value, and performing defogging processing on the image according to the defogging parameters.
The image processing apparatus collects the image through the first camera and obtains its first depth-of-field information through the second camera, determines the fog concentration of the image according to the first depth-of-field information, and calculates the visibility. When the visibility is smaller than a preset threshold value, the defogging parameter is calculated according to the fog concentration and defogging is performed. The collected image can thus be defogged to different degrees according to different fog concentrations, the fog can be effectively removed, and the driving video recorded in foggy weather is clearer.
As shown in fig. 7, in one embodiment, the first determining module 620 includes an obtaining unit 622, a dividing unit 624 and a filtering unit 626.
And an obtaining unit 622 for obtaining the dark channel image of the image.
The dividing unit 624 is configured to perform area division on the dark channel image according to the first depth-of-field information.
A filtering unit 626, configured to perform smoothing filtering processing on each divided region, and determine the fog concentration of the image according to the filtered dark channel image.
In this embodiment, the dark channel image of the acquired image may be subjected to region division according to the first depth of field information, each divided region may be subjected to smoothing filtering, and the fog concentration of the image may be determined, so that the estimated fog concentration may be more accurate, the defogging effect may be effectively improved, and the defogged image may be clearer.
As shown in fig. 8, in one embodiment, the defogging module 640 includes a detection unit and a defogging unit.
And the detection unit is used for detecting whether the image contains a moving object.
And the defogging unit is used for extracting a moving object region in the image and performing defogging processing on the moving object region if the image contains a moving object.
In one embodiment, the defogging unit comprises an acquiring subunit, a selecting subunit and an adjusting subunit.
And the acquisition subunit is used for acquiring second depth-of-field information of the moving object region through the second camera.
And the selecting subunit is used for selecting the correction factor matched with the second depth of field information.
And the adjusting subunit is used for adjusting the defogging parameters according to the correction factors and performing defogging treatment on the moving object region according to the adjusted defogging parameters.
In this embodiment, when it is detected that the image includes the moving object, the transmittance may be adjusted by selecting the correction factor according to the second depth of field information of the moving object region, so that the moving object region in the image is clearer and the defogging effect is better.
As shown in fig. 8, in an embodiment, the image processing apparatus 600 further includes a second determining module 650, a distance acquiring module 660, and an alarming module 670, in addition to the acquiring module 610, the first determining module 620, the calculating module 630, and the defogging module 640.
A second determining module 650, configured to determine a visibility range interval in which visibility falls.
And a distance obtaining module 660, configured to obtain a line-of-sight distance matching the visibility range interval.
The warning module 670 is configured to obtain a current moving speed, and send a warning if the moving speed is greater than a preset standard speed corresponding to the line-of-sight distance.
In the embodiment, the moving speed of the vehicle can be reminded according to the current visibility information, and the driving safety of a user in a foggy day is ensured.
The division of the modules in the image processing apparatus is merely for illustration; in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of its functions.
The embodiment of the application also provides the mobile terminal. The mobile terminal includes an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 9, for convenience of explanation, only aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in fig. 9, the image processing circuit includes an ISP processor 940 and a control logic 950. The image data captured by the imaging device 910 is first processed by the ISP processor 940, and the ISP processor 940 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 910. The imaging device 910 may include a camera having one or more lenses 912 and an image sensor 914. Image sensor 914 may include an array of color filters (e.g., Bayer filters), and image sensor 914 may acquire light intensity and wavelength information captured with each imaging pixel of image sensor 914 and provide a set of raw image data that may be processed by ISP processor 940. The sensor 920 may provide the raw image data to the ISP processor 940 based on the type of interface of the sensor 920. The sensor 920 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the raw image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 940 may also receive pixel data from image memory 930. For example, raw pixel data is sent from the sensor 920 interface to the image memory 930, and the raw pixel data in the image memory 930 is then provided to the ISP processor 940 for processing. The image Memory 930 may be a part of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the sensor 920 interface or from the image memory 930, the ISP processor 940 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 930 for additional processing before being displayed. The ISP processor 940 may also receive processed data from the image memory 930 for image data processing in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 980 for viewing by a user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). Further, the output of the ISP processor 940 may also be sent to the image memory 930, and the display 980 may read image data from the image memory 930. In one embodiment, the image memory 930 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 940 may be transmitted to an encoder/decoder 970 to encode/decode the image data. The encoded image data may be saved and decompressed before display on the display 980.
The step of the ISP processor 940 processing the image data includes: the image data is subjected to VFE (Video Front End) Processing and CPP (Camera Post Processing). The VFE processing of the image data may include modifying the contrast or brightness of the image data, modifying digitally recorded lighting status data, performing compensation processing (e.g., white balance, automatic gain control, gamma correction, etc.) on the image data, performing filter processing on the image data, etc. CPP processing of image data may include scaling an image, providing a preview frame and a record frame to each path. Among other things, the CPP may use different codecs to process the preview and record frames.
The image data processed by the ISP processor 940 may be transmitted to a defogging module 960 to defog the image before it is displayed. The defogging module 960 can determine the fog concentration of the image according to the first depth-of-field information of the image, calculate the visibility according to the fog concentration, calculate the defogging parameter according to the fog concentration when the visibility is smaller than a preset threshold value, and defog the image according to the defogging parameter. The defogging module 960 may be a central processing unit (CPU), a GPU, a coprocessor, or the like. After the defogging module 960 defogs the image data, the defogged image data may be transmitted to the encoder/decoder 970 to be encoded/decoded. The encoded image data may be saved and decompressed before display on the display 980. It is understood that the image data processed by the defogging module 960 may also be sent directly to the display 980 without passing through the encoder/decoder 970. The image data processed by the ISP processor 940 may likewise be processed by the encoder/decoder 970 first and then by the defogging module 960. The encoder/decoder may be a CPU, a GPU, a coprocessor, or the like in the mobile terminal.
The statistical data determined by the ISP processor 940 may be transmitted to the control logic 950 unit. For example, the statistical data may include image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 912 shading correction, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 910 and control parameters of the ISP processor 940 based on the received statistical data. For example, the control parameters may include sensor 920 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 912 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 912 shading correction parameters.
In the present embodiment, the image processing method described above can be realized by using the image processing technique in fig. 9.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the above-mentioned image processing method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (6)

1. An image processing method, comprising:
acquiring an image through a first camera, and acquiring first depth-of-field information of the image through a second camera;
obtaining a dark channel image of the image; if the difference value between two adjacent pixel points in the dark channel image is larger than a first numerical value, the fog concentration distribution of the dark channel image is not uniform;
judging whether the fog concentration is uniformly distributed or not according to the first depth of field information;
if the dark channel image is not uniform, performing area division on the dark channel image according to the first depth of field information;
carrying out smooth filtering processing on each divided region, determining the fog concentration of the image according to the dark channel image after filtering processing, and taking the average value of the fog concentrations of the regions as the fog concentration of the image;
if not, the fog concentration of the dark channel image is uniformly distributed, and the fog concentration of the image is determined according to the first depth of field information;
calculating visibility according to the fog concentration;
when the visibility is smaller than a preset threshold value, defogging parameters are calculated according to the fog concentration, and
detecting whether the image contains a moving object;
if yes, extracting a moving object region in the image, and performing defogging processing on the moving object region;
acquiring second depth-of-field information of the moving object region through the second camera;
selecting a correction factor matched with the second depth of field information;
and adjusting the defogging parameters according to the correction factor, and performing defogging treatment on the moving object region according to the adjusted defogging parameters.
2. The method of claim 1, further comprising:
determining a visibility range interval in which the visibility falls;
acquiring a line-of-sight distance matched with the visibility range interval;
and acquiring the current moving speed, and if the moving speed is greater than a preset standard speed corresponding to the line-of-sight distance, giving out a warning.
3. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an image through a first camera and acquiring first depth-of-field information of the image through a second camera;
the first determining module is used for determining the fog concentration of the image according to the first depth information;
the computing module is used for computing visibility according to the fog concentration;
the defogging module is used for calculating defogging parameters according to the fog concentration when the visibility is smaller than a preset threshold value, and performing defogging processing on the image according to the defogging parameters;
the first determining module includes:
the obtaining unit is used for obtaining a dark channel image of the image;
the dividing unit is used for carrying out region division on the dark channel image according to the first depth of field information;
the filtering unit is used for performing smooth filtering processing on each divided region, determining the fog concentration of the image according to the filtered dark channel image, and taking the average value of the fog concentrations of the regions as the fog concentration of the image;
the defogging module comprises:
the detection unit is used for detecting whether the image contains a moving object;
the defogging unit is used for extracting a moving object region in the image and defogging the moving object region if the image contains a moving object;
the acquisition subunit is used for acquiring second depth-of-field information of the moving object region through the second camera;
a selecting subunit, configured to select a correction factor matched with the second depth-of-field information;
and the adjusting subunit is used for adjusting the defogging parameters according to the correction factors and performing defogging processing on the moving object region according to the adjusted defogging parameters.
4. The apparatus of claim 3, further comprising:
the second determining module is used for determining a visibility range interval in which the visibility falls;
the distance acquisition module is used for acquiring the line-of-sight distance matched with the visibility range interval;
and the warning module is used for acquiring the current moving speed, and if the moving speed is greater than a preset standard speed corresponding to the line-of-sight distance, warning is sent.
5. A mobile terminal comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor when executing the program implementing the method according to any of claims 1 to 2.
6. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 2.
CN201710626257.1A 2017-07-27 2017-07-27 Image processing method, image processing device, mobile terminal and computer readable storage medium Active CN107277299B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710626257.1A CN107277299B (en) 2017-07-27 2017-07-27 Image processing method, image processing device, mobile terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710626257.1A CN107277299B (en) 2017-07-27 2017-07-27 Image processing method, image processing device, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN107277299A CN107277299A (en) 2017-10-20
CN107277299B true CN107277299B (en) 2020-08-18

Family

ID=60074628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710626257.1A Active CN107277299B (en) 2017-07-27 2017-07-27 Image processing method, image processing device, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107277299B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730531A (en) * 2017-10-26 2018-02-23 张斌 Moving image layered process system and method
CN107808137A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN110796610A (en) * 2019-09-29 2020-02-14 百度在线网络技术(北京)有限公司 Image defogging method, device and equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103257247A (en) * 2012-02-17 2013-08-21 安徽理工大学 Automobile overspeed alarm system based on visibility meter
CN103747213A (en) * 2014-01-15 2014-04-23 北京工业大学 Traffic monitoring video real-time defogging method based on moving targets
CN105282421A (en) * 2014-07-16 2016-01-27 宇龙计算机通信科技(深圳)有限公司 Defogged image obtaining method, device and terminal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3862613B2 (en) * 2002-06-05 2006-12-27 キヤノン株式会社 Image processing apparatus, image processing method, and computer program
CN104809707B (en) * 2015-04-28 2017-05-31 西南科技大学 A kind of single width Misty Image visibility method of estimation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103257247A (en) * 2012-02-17 2013-08-21 安徽理工大学 Automobile overspeed alarm system based on visibility meter
CN103747213A (en) * 2014-01-15 2014-04-23 北京工业大学 Traffic monitoring video real-time defogging method based on moving targets
CN105282421A (en) * 2014-07-16 2016-01-27 宇龙计算机通信科技(深圳)有限公司 Defogged image obtaining method, device and terminal

Also Published As

Publication number Publication date
CN107277299A (en) 2017-10-20

Similar Documents

Publication Publication Date Title
CN107424198B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN110149482B (en) Focusing method, focusing device, electronic equipment and computer readable storage medium
CN107493432B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107424133B (en) Image defogging method and device, computer storage medium and mobile terminal
CN110473185B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109005364B (en) Imaging control method, imaging control device, electronic device, and computer-readable storage medium
CN108111749B (en) Image processing method and device
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN113766125B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN107317967B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN110366048B (en) Video transmission method, video transmission device, electronic equipment and computer-readable storage medium
CN107862658B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN110121031B (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN107277299B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107563979B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN109712177B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107622497B (en) Image cropping method and device, computer readable storage medium and computer equipment
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN109089046A (en) Image denoising method, device, computer readable storage medium and electronic equipment
CN107454319B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108259754B (en) Image processing method and device, computer readable storage medium and computer device
CN107563329B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal
CN107454335B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
CB02 Change of applicant information

Address after: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: Changan town in Guangdong province Dongguan 523860 usha Beach Road No. 18

Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

GR01 Patent grant
GR01 Patent grant