Disclosure of Invention
The technical problem to be solved by the invention is as follows: in the prior art, the dark channel algorithm is only suitable for low-visibility conditions, so that visibility calculated with the dark channel algorithm alone is inaccurate and subject to large errors.
In order to solve the technical problem, the invention provides a visibility inversion method, which comprises the following steps:
acquiring a target image of visibility to be calculated at the current moment;
obtaining a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set, wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of target objects, extracted from the target image based on different image processing algorithms;
calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
calculating a third visibility sequence of the target image based on a dark channel algorithm;
and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Optionally, the obtaining a first set of pixel positions corresponding to the first target object sub-image set and a second set of pixel positions corresponding to the second target object sub-image set includes:
extracting a first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set; and
extracting a second target object sub-image set of the target image based on gray values of its grayscale image to obtain the second pixel position set.
Optionally, the step of constructing the visibility regression model includes:
acquiring a reference image with clear visibility;
selecting a plurality of target objects from the reference image, wherein the plurality of target objects comprise short-distance target objects and long-distance target objects;
removing, from the plurality of target objects, the target objects whose image-edge pixel values are smaller than a first set threshold;
acquiring pixel positions of the target objects remaining after removal, and the actual distance between each target object and the camera;
adding white noise to the reference image to simulate fogging, and obtaining a group of simulated images;
calculating a visibility sequence of the group of simulated images by using the dark channel algorithm, and obtaining the standard deviation of each simulated image;
and fitting regression coefficients of the visibility regression model based on the visibility sequence and the corresponding standard deviations, and constructing the visibility regression model based on the regression coefficients.
Optionally, after obtaining the first visibility sequence corresponding to the first target object sub-image set and the second visibility sequence corresponding to the second target object sub-image set, the method further includes:
determining a plurality of visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, wherein the visibility combinations comprise a first visibility and a second visibility;
respectively calculating visibility difference values between the first visibility and the second visibility in each visibility combination;
if the visibility difference value is larger than a set value, determining that the first visibility and the second visibility in the visibility combination are not matched, removing the same target object from the first target object sub-image set and the second target object sub-image set respectively, and deleting the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
Optionally, the determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence includes:
determining a visibility grade corresponding to the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence;
determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade;
and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
Optionally, after the weighting processing is performed on the first visibility sequence, the second visibility sequence, and the third visibility sequence according to the determined weighting coefficients, and before the target visibility of the target image is obtained, the method further includes:
and performing edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set after the unmatched target objects are removed, and removing the target objects whose image-edge pixel values are smaller than a set threshold.
Optionally, the method further comprises:
and if no target object exists in the first target object sub-image set and the second target object sub-image set after the removing, determining the target visibility determined at the previous moment as the target visibility of the target image at the current moment.
Optionally, after determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence, the method further includes:
calculating a difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment;
and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
In order to solve the above technical problem, the present invention provides a visibility inversion apparatus, including:
the target image acquisition module is used for acquiring a target image of the visibility to be calculated;
a target position obtaining module, configured to obtain a first set of pixel positions corresponding to a first target object sub-image set and a second set of pixel positions corresponding to a second target object sub-image set, where the first target object sub-image set and the second target object sub-image set are image regions corresponding to multiple target objects extracted from the target image based on different image processing algorithms;
the first visibility calculation module is used for calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
the second visibility calculation module is used for calculating a third visibility sequence of the target image based on the dark channel algorithm;
and the target visibility determining module is used for determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Optionally, the target position obtaining module is configured to:
extracting a first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set; and
extracting a second target object sub-image set of the target image based on gray values of its grayscale image to obtain the second pixel position set.
Optionally, the first visibility calculation module is further configured to:
after a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set are obtained, determining a plurality of visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, wherein the visibility combinations comprise a first visibility and a second visibility;
respectively calculating visibility difference values between the first visibility and the second visibility in each visibility combination;
if the visibility difference value is larger than a set value, determining that the first visibility and the second visibility in the visibility combination are not matched, removing the same target object from the first target object sub-image set and the second target object sub-image set respectively, and deleting the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
Optionally, the target visibility determining module is configured to:
determining a visibility grade corresponding to the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence;
determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade;
and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
Optionally, the target visibility determination module is further configured to:
after weighting processing is performed on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients, performing edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set from which the unmatched target objects have been removed, removing the target objects whose image-edge pixel values are smaller than a set threshold, and obtaining the target visibility of the target image based on the culled first target object sub-image set and second target object sub-image set.
Optionally, the target visibility determination module is further configured to:
and if no target object exists in the first target object sub-image set and the second target object sub-image set after the removing, determining the target visibility determined at the previous moment as the target visibility of the target image at the current moment.
Optionally, the target visibility determination module is further configured to:
after determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence, calculating a difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment;
and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
In order to solve the above technical problem, the present invention provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the above method when executing the computer program.
To solve the above technical problem, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above method.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
by applying the visibility inversion method, the visibility inversion apparatus, the computer device and the storage medium, in the visibility inversion process at the current moment, a target image of visibility to be calculated at the current moment is first acquired; a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set are obtained, wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of target objects, extracted from the target image based on different image processing algorithms; a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set are calculated, and the first standard deviation and the second standard deviation are respectively input into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set; a third visibility sequence of the target image is calculated based on the dark channel algorithm; and the target visibility of the target image is determined according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Therefore, in the visibility inversion process, two different extraction algorithms are used to select the target objects in the target image, and the visibility values obtained for the same target object sub-image by the different extraction algorithms are verified against each other, which avoids the large errors a single algorithm is prone to. Furthermore, the visibility calculated by the regression model and the visibility calculated by the dark channel algorithm are combined by weighting to determine the final target visibility, so that the target visibility of the target image can be calculated accurately under different visibility levels. This overcomes the defect that the dark channel algorithm is only suitable for low-visibility scenes: the advantages of the dark channel algorithm are fully utilized in low-visibility scenes, its shortcomings are compensated to a certain extent, and an accurate visibility calculation result can still be obtained in high-visibility scenes.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the prior-art problems that the dark channel algorithm is only suitable for low-visibility conditions, so that visibility calculated with the dark channel algorithm is not accurate enough and has large errors, embodiments of the invention provide a visibility inversion method, an apparatus, a computer device and a storage medium.
The visibility inversion method provided by the embodiment of the invention is explained below.
Example one
Fig. 1 shows a flowchart of a visibility inversion method provided by the present invention, which may include the following steps:
step S101: and acquiring a target image of the visibility to be calculated at the current moment.
Step S102: and obtaining a first pixel position set corresponding to the first target object sub-image set and a second pixel position set corresponding to the second target object sub-image set.
Wherein the first target object sub-image set and the second target object sub-image set are image regions corresponding to a plurality of target objects extracted from the target image based on different image processing algorithms.
In one case, the obtaining a first set of pixel positions corresponding to the first target object sub-image set and a second set of pixel positions corresponding to the second target object sub-image set includes: extracting a first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set (see fig. 2); and extracting a second target object sub-image set of the target image based on gray values of its grayscale image to obtain the second pixel position set.
It should be noted that the above-mentioned Canny edge detection operator extraction method and the gray value extraction method based on the gray image are only two specific forms provided by the embodiment of the present invention, and should not be construed as a limitation to the present invention, and those skilled in the art can reasonably set the operation according to specific situations in practical applications, such as specific visibility application scenarios.
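The following is a minimal sketch of these two extraction routes, assuming OpenCV and NumPy; the thresholds, the bright-region reading of the gray-value route, and the idea of taking the step-S103 standard deviation over the gray values at the returned positions are illustrative assumptions, not the claimed implementation:

```python
import cv2
import numpy as np

def canny_target_positions(image_bgr, low=50, high=150):
    """First route: pixel positions of target edges found by the Canny operator."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    ys, xs = np.nonzero(edges)              # positions of edge pixels
    return np.column_stack([xs, ys])

def gray_value_target_positions(image_bgr, threshold=200):
    """Second route: positions selected by thresholding the grayscale image
    (assumed here to keep bright regions; the text leaves the rule open)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    ys, xs = np.nonzero(gray >= threshold)
    return np.column_stack([xs, ys])
```

The standard deviation fed to the regression model in step S103 could then, for example, be taken over the gray values at the returned positions.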
Step S103: and calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set.
In one case, the visibility regression model may be constructed in the following manner, and the specific steps include:
(1) acquiring a reference image with clear visibility;
(2) selecting a plurality of target objects from the reference image, wherein the plurality of target objects comprise short-distance target objects and long-distance target objects;
(3) removing, from the plurality of target objects, the target objects whose image-edge pixel values are smaller than a first set threshold;
(4) acquiring pixel positions of the target objects remaining after removal, and the actual distance between each target object and the camera;
(5) adding white noise to the reference image to simulate fogging, and obtaining a group of simulated images;
(6) calculating a visibility sequence of the group of simulated images by using the dark channel algorithm, and obtaining the standard deviation of each simulated image;
(7) fitting regression coefficients of the visibility regression model based on the visibility sequence and the corresponding standard deviations, and constructing the visibility regression model based on the regression coefficients.
For example, a cubic polynomial regression model can be established based on the above method, as shown in the following expression:
vis = a·x³ + b·x² + c·x + d
wherein vis is the visibility, a, b, c and d are the regression coefficients, and x is the standard deviation.
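As a sketch of steps (6) and (7), the cubic fit can be done with an ordinary least-squares polynomial fit; the training pairs below are purely hypothetical placeholders:

```python
import numpy as np

# Hypothetical training data: the standard deviation of each simulated image
# and the visibility (km) computed for it with the dark channel algorithm.
x = np.array([4.1, 7.9, 12.3, 18.6, 25.0])
vis = np.array([0.8, 2.1, 4.6, 8.9, 14.2])

# Fit vis = a*x**3 + b*x**2 + c*x + d; np.polyfit returns [a, b, c, d].
a, b, c, d = np.polyfit(x, vis, deg=3)

def regression_visibility(std):
    """Evaluate the fitted cubic visibility regression model at one standard deviation."""
    return a * std**3 + b * std**2 + c * std + d
```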
Step S104: and calculating a third visibility sequence of the target image based on the dark channel algorithm.
The dark channel algorithm is introduced below. Assuming a homogeneous atmosphere, the relationship between visibility and the atmospheric extinction coefficient is shown in the following expression:
V = -ln(ε) / σ
wherein V is the visibility, σ is the atmospheric extinction coefficient, and ε is the contrast threshold. For aviation work, ε = 0.05 is generally adopted.
From the above expression, it can be seen that the visibility can be obtained once the extinction coefficient along the path from the target object to the lens is known. The dark channel algorithm assumes that, in non-sky regions of the image, at least one color channel has pixel values close to 0, so the transmittance T can be taken as:
T = 1 - min_c( min_{y∈Ω}( img_c(y) / A_c ) )
wherein img is the target object image, c indexes the color channels, Ω is a local patch, and A is the ambient light background brightness.
Further, the relationship between the extinction coefficient and the transmittance is:
T = e^(-σ·L), that is, σ = -ln(T) / L
wherein T is the transmittance and L is the distance from the target object to the camera.
Thus, the target visibility inversion formula used in step S104 is obtained as follows:
V = (ln ε / ln T) · L ≈ 3L / (-ln T) for ε = 0.05.
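A minimal sketch of this dark channel inversion for a single target sub-image, assuming OpenCV and NumPy; the per-channel-maximum choice of A, the patch size, and averaging the transmittance over the patch are simplifying assumptions:

```python
import cv2
import numpy as np

EPSILON = 0.05  # contrast threshold generally adopted for aviation work

def dark_channel_visibility(image_bgr, distance_m, patch=15):
    """Estimate visibility (meters) from one target sub-image and its distance."""
    img = image_bgr.astype(np.float64)
    # Ambient light background brightness A: a simple per-channel maximum here;
    # dehazing implementations usually pick it from the brightest dark-channel pixels.
    A = img.reshape(-1, 3).max(axis=0)
    # Dark channel: minimum over color channels, then a patch-wise minimum filter.
    dark = cv2.erode((img / A).min(axis=2), np.ones((patch, patch), np.uint8))
    T = np.clip(1.0 - dark, 1e-6, 1.0 - 1e-6)   # transmittance, kept in (0, 1)
    sigma = -np.log(T.mean()) / distance_m       # extinction coefficient
    return -np.log(EPSILON) / sigma              # V = -ln(eps) / sigma
```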
step S105: and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
In one implementation, the target visibility of the target image may be determined as follows: determining a visibility grade corresponding to the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence; determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade; and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
For example, if the visibility value at the previous moment is less than 3 km, the weights of the first visibility sequence, the second visibility sequence and the third visibility sequence may be set to 0.2, 0.2 and 0.6 respectively when the three sequences are weighted and averaged; if the visibility value at the previous moment is greater than 3 km and less than 5 km, the weights may be set to 0.33, 0.33 and 0.34 respectively; if the visibility value at the previous moment is greater than 5 km and less than 10 km, the weights may be set to 0.4, 0.4 and 0.2 respectively; and if the visibility value at the previous moment is greater than 10 km, the weights may be set to 0.5, 0.5 and 0 respectively.
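The grade-dependent weighting of this example can be sketched as follows; the function names are illustrative, and the grade is read from the previous moment's visibility as described above:

```python
def weighting_coefficients(prev_visibility_km):
    """Weights (w1, w2, w3) for the first, second and third visibility
    sequences, chosen from the visibility grade of the previous moment."""
    if prev_visibility_km < 3:
        return 0.2, 0.2, 0.6        # low visibility: lean on the dark channel
    if prev_visibility_km < 5:
        return 0.33, 0.33, 0.34
    if prev_visibility_km < 10:
        return 0.4, 0.4, 0.2
    return 0.5, 0.5, 0.0            # high visibility: dark channel excluded

def target_visibility(v1, v2, v3, prev_visibility_km):
    """Weighted average of the three visibility estimates."""
    w1, w2, w3 = weighting_coefficients(prev_visibility_km)
    return w1 * v1 + w2 * v2 + w3 * v3
```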
Further, in step S105, after the first visibility sequence, the second visibility sequence and the third visibility sequence are weighted according to the determined weighting coefficients, and before the target visibility of the target image is obtained, the method may further include the following step: performing edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set after the unmatched target objects are removed, and removing the target objects whose image-edge pixel values are smaller than a set threshold.
In a preferred implementation of the invention, if no target object exists in the first target object sub-image set and the second target object sub-image set after the removing, the target visibility determined at the previous moment is determined as the target visibility of the target image at the current moment, which ensures that the visibility inversion result at the current moment is not empty.
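A sketch of this edge-quality culling together with the empty-set fallback; the Canny thresholds and the mean-gray-along-edges criterion are assumptions:

```python
import cv2
import numpy as np

def cull_weak_edge_targets(sub_images, edge_threshold=40.0):
    """Keep sub-images whose mean gray value along detected edges meets the threshold."""
    kept = []
    for sub in sub_images:
        gray = cv2.cvtColor(sub, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        if edges.any() and gray[edges > 0].mean() >= edge_threshold:
            kept.append(sub)
    return kept

def visibility_or_fallback(kept_first, kept_second, compute, previous_visibility):
    """Fall back to the previous moment's visibility when no target object survives."""
    if not kept_first and not kept_second:
        return previous_visibility
    return compute(kept_first, kept_second)
```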
Therefore, in the visibility inversion process, two different extraction algorithms are used to select the target objects in the target image, and the visibility values obtained for the same target object sub-image by the different extraction algorithms are verified against each other, which avoids the large errors a single algorithm is prone to. Furthermore, the visibility calculated by the regression model and the visibility calculated by the dark channel algorithm are combined by weighting to determine the final target visibility, so that the target visibility of the target image can be calculated accurately under different visibility levels. This overcomes the defect that the dark channel algorithm is only suitable for low-visibility scenes: the advantages of the dark channel algorithm are fully utilized in low-visibility scenes, its shortcomings are compensated to a certain extent, and an accurate visibility calculation result can still be obtained in high-visibility scenes.
Example two
Fig. 3 shows another flowchart of the visibility inversion method provided by the present invention, which may include the following steps:
step S201: and acquiring a target image of the visibility to be calculated at the current moment.
Step S202: and obtaining a first pixel position set corresponding to the first target object sub-image set and a second pixel position set corresponding to the second target object sub-image set.
Wherein the first target object sub-image set and the second target object sub-image set are image regions corresponding to a plurality of target objects extracted from the target image based on different image processing algorithms.
Step S203: and calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set.
Step S204: and determining a plurality of visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, wherein the visibility combinations comprise a first visibility and a second visibility.
Step S205: and respectively calculating the visibility difference value between the first visibility and the second visibility in each visibility combination.
Step S206: if the visibility difference value is larger than a set value, determining that the first visibility and the second visibility in the visibility combination are not matched, removing the same target object from the first target object sub-image set and the second target object sub-image set respectively, and deleting the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
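Steps S204 to S206 amount to a cross-check between the two per-target estimates; a sketch under the assumption that each sequence maps a target identifier to its visibility value:

```python
def cross_validate(vis_first, vis_second, max_diff_km=1.0):
    """Drop target objects whose two visibility estimates do not match.

    vis_first and vis_second map a target id to the visibility obtained from
    the Canny-based and gray-value-based sub-images respectively; the 1 km
    set value is a placeholder."""
    seq1, seq2 = {}, {}
    for target_id in vis_first.keys() & vis_second.keys():
        v1, v2 = vis_first[target_id], vis_second[target_id]
        if abs(v1 - v2) <= max_diff_km:   # matched combination: keep both
            seq1[target_id], seq2[target_id] = v1, v2
    return seq1, seq2
```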
Step S207: and calculating a third visibility sequence of the target image based on the dark channel algorithm.
Step S208: and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
It should be noted that the method embodiment shown in fig. 3 has all the advantages of the method embodiment shown in fig. 1 and further improves on it. Specifically, after the first visibility sequence and the second visibility sequence are obtained from the visibility regression model, their visibility values are not applied directly for mutual verification; instead, the visibility values are screened first and those that do not meet the set value are removed, which further improves the accuracy of the final target visibility.
Example three
Fig. 4 shows another flowchart of the visibility inversion method provided by the present invention, which may include the following steps:
step S301: and acquiring a target image of the visibility to be calculated at the current moment.
Step S302: and obtaining a first pixel position set corresponding to the first target object sub-image set and a second pixel position set corresponding to the second target object sub-image set.
Wherein the first target object sub-image set and the second target object sub-image set are image regions corresponding to a plurality of target objects extracted from the target image based on different image processing algorithms.
Step S303: and calculating a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively inputting the first standard deviation and the second standard deviation into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set.
Step S304: and calculating a third visibility sequence of the target image based on the dark channel algorithm.
Step S305: and determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Step S306: and calculating the difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment.
Step S307: and correcting the target visibility at the current moment according to the difference value of the target visibility to obtain the corrected target visibility at the current moment.
It can be understood that, when comparing the target visibility at the current moment with that at the previous moment, the visibility may be trending either clearer or more blurred, so the current value may be either higher or lower than the previous one. Therefore, when calibrating the target visibility at the current moment with the target visibility at the previous moment, an adjustment ratio can be set, for example 10%: if the target visibility at the current moment exceeds the target visibility at the previous moment by more than 10%, the target visibility at the previous moment multiplied by 110% is taken as the target visibility at the current moment; conversely, if the target visibility at the current moment is more than 10% below that at the previous moment, the target visibility at the previous moment multiplied by 90% is taken as the target visibility at the current moment; and if the ratio of the current to the previous target visibility lies between 90% and 110%, the target visibility at the current moment is output directly, indicating that the current visibility value is accurate and needs no calibration.
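A minimal sketch of this correction step under the 10% adjustment ratio of the example:

```python
def correct_visibility(current, previous, ratio=0.10):
    """Clamp the current target visibility to within ±ratio of the previous moment."""
    upper = previous * (1.0 + ratio)
    lower = previous * (1.0 - ratio)
    if current > upper:
        return upper
    if current < lower:
        return lower
    return current            # within range: output directly, no calibration
```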
The visibility inversion apparatus provided in the embodiment of the present invention is explained below.
Example four
As shown in fig. 5, a block diagram of a visibility inversion apparatus provided in an embodiment of the present invention includes:
a target image obtaining module 410, configured to obtain a target image of visibility to be calculated;
a target position obtaining module 420, configured to obtain a first set of pixel positions corresponding to a first target object sub-image set and a second set of pixel positions corresponding to a second target object sub-image set, where the first target object sub-image set and the second target object sub-image set are image regions corresponding to multiple targets extracted from the target image based on different image processing algorithms;
a first visibility calculation module 430, configured to calculate a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set, and respectively input the first standard deviation and the second standard deviation into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set;
the second visibility calculation module 440 is configured to calculate a third visibility sequence of the target image based on the dark channel algorithm;
a target visibility determining module 450, configured to determine target visibility of the target image according to the first visibility sequence, the second visibility sequence, and the third visibility sequence.
In one case, the target position obtaining module 420 is configured to extract a first target object sub-image set of the target image by using a Canny edge detection operator to obtain the first pixel position set; and extracting a second target object sub-image set of the target image based on the gray value of the gray image to obtain a second pixel position set.
In one case, the first visibility calculating module 430 is further configured to determine, after obtaining a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set, multiple visibility combinations corresponding to the same target object from the first visibility sequence and the second visibility sequence, where the visibility combinations include a first visibility and a second visibility; respectively calculate visibility difference values between the first visibility and the second visibility in each visibility combination; and if the visibility difference value is larger than a set value, determine that the first visibility and the second visibility in the visibility combination are not matched, remove the same target object from the first target object sub-image set and the second target object sub-image set respectively, and delete the corresponding first visibility and second visibility in the visibility combination from the first visibility sequence and the second visibility sequence respectively.
In another case, the target visibility determining module 450 is configured to determine, according to the first visibility sequence, the second visibility sequence, and the third visibility sequence, a visibility level corresponding to the target image; determining weighting coefficients corresponding to the first visibility sequence, the second visibility sequence and the third visibility sequence based on the visibility grade; and performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients to obtain the target visibility of the target image.
In another case, the target visibility determining module 450 is further configured to, after performing weighting processing on the first visibility sequence, the second visibility sequence and the third visibility sequence according to the determined weighting coefficients, perform edge extraction on the target objects in the first target object sub-image set and the second target object sub-image set from which the unmatched target objects have been removed, remove the target objects whose image-edge pixel values are smaller than a set threshold, and obtain the target visibility of the target image based on the culled first target object sub-image set and second target object sub-image set.
In another case, the target visibility determining module 450 is further configured to determine, if no target object exists in the first target object sub-image set and the second target object sub-image set after the removing, the target visibility determined at the previous moment as the target visibility of the target image at the current moment.
In another case, the target visibility determining module 450 is further configured to calculate, after determining the target visibility of the target image according to the first visibility sequence, the second visibility sequence and the third visibility sequence, a difference value between the target visibility of the target image at the current moment and the target visibility of the target image at the previous moment; and correct the target visibility at the current moment according to the difference value to obtain the corrected target visibility at the current moment.
By applying the visibility inversion apparatus provided by the invention, in the visibility inversion process at the current moment, a target image of visibility to be calculated at the current moment is acquired; a first pixel position set corresponding to a first target object sub-image set and a second pixel position set corresponding to a second target object sub-image set are obtained, wherein the first target object sub-image set and the second target object sub-image set are image regions, corresponding to a plurality of target objects, extracted from the target image based on different image processing algorithms; a first standard deviation corresponding to the first pixel position set and a second standard deviation corresponding to the second pixel position set are calculated, and the first standard deviation and the second standard deviation are respectively input into a pre-constructed visibility regression model to obtain a first visibility sequence corresponding to the first target object sub-image set and a second visibility sequence corresponding to the second target object sub-image set; a third visibility sequence of the target image is calculated based on the dark channel algorithm; and the target visibility of the target image is determined according to the first visibility sequence, the second visibility sequence and the third visibility sequence.
Therefore, in the visibility inversion process, two different extraction algorithms are used to select the target objects in the target image, and the visibility values obtained for the same target object sub-image by the different extraction algorithms are verified against each other, which avoids the large errors a single algorithm is prone to. Furthermore, the visibility calculated by the regression model and the visibility calculated by the dark channel algorithm are combined by weighting to determine the final target visibility, so that the target visibility of the target image can be calculated accurately under different visibility levels. This overcomes the defect that the dark channel algorithm is only suitable for low-visibility scenes: the advantages of the dark channel algorithm are fully utilized in low-visibility scenes, its shortcomings are compensated to a certain extent, and an accurate visibility calculation result can still be obtained in high-visibility scenes.
Example five
To solve the above technical problem, the present invention provides a computer device, as shown in fig. 6, including a memory 510, a processor 520, and a computer program stored on the memory and executable on the processor, wherein the processor executes the computer program to implement the method as described above.
The computer device can be a desktop computer, a notebook, a palm computer, a cloud server or other computing device. The computer device may include, but is not limited to, a processor 520 and a memory 510. Those skilled in the art will appreciate that fig. 6 is merely an example of a computer device and is not intended to be limiting; the device may include more or fewer components than those shown, or combine some components, or have different components; for example, the computer device may also include input/output devices, network access devices, buses, etc.
The Processor 520 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 510 may be an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. The memory 510 may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the computer device. Further, the memory 510 may also include both an internal storage unit and an external storage device of the computer device. The memory 510 is used for storing the computer program and other programs and data required by the computer device. The memory 510 may also be used to temporarily store data that has been output or is to be output.
Example six
The embodiment of the present application further provides a computer-readable storage medium, which may be a computer-readable storage medium contained in the memory in the foregoing embodiment; or it may be a computer-readable storage medium that exists separately and is not incorporated into a computer device. The computer-readable storage medium stores one or more computer programs which, when executed by a processor, implement the methods described above.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, USB disk, removable hard disk, magnetic disk, optical disk, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals.
For system or apparatus embodiments, since they are substantially similar to method embodiments, they are described in relative simplicity, and reference may be made to some descriptions of method embodiments for related points.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a described condition or event is detected" may be interpreted, depending on the context, to mean "upon determining" or "in response to determining" or "upon detecting a described condition or event" or "in response to detecting a described condition or event".
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.