CN111553862B - Defogging and binocular stereoscopic vision positioning method for sea and sky background image - Google Patents
- Publication number
- CN111553862B CN111553862B CN202010359252.9A CN202010359252A CN111553862B CN 111553862 B CN111553862 B CN 111553862B CN 202010359252 A CN202010359252 A CN 202010359252A CN 111553862 B CN111553862 B CN 111553862B
- Authority
- CN
- China
- Prior art keywords
- image
- value
- area
- sky
- defogging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 34
- 238000001914 filtration Methods 0.000 claims abstract description 8
- 238000005457 optimization Methods 0.000 claims abstract description 5
- 238000010586 diagram Methods 0.000 claims description 12
- 238000002834 transmittance Methods 0.000 claims description 9
- 238000012545 processing Methods 0.000 claims description 4
- 239000003086 colorant Substances 0.000 claims description 3
- 238000005259 measurement Methods 0.000 claims description 3
- 230000000903 blocking effect Effects 0.000 claims description 2
- 230000011218 segmentation Effects 0.000 abstract description 3
- 238000007670 refining Methods 0.000 abstract 1
- 230000000694 effects Effects 0.000 description 14
- 238000003384 imaging method Methods 0.000 description 11
- 239000011159 matrix material Substances 0.000 description 4
- 230000008569 process Effects 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000002474 experimental method Methods 0.000 description 2
- 230000009467 reduction Effects 0.000 description 2
- 230000008859 change Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000005260 corrosion Methods 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000003702 image correction Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000035772 mutation Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000004451 qualitative analysis Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000011524 similarity measure Methods 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
- 230000003313 weakening effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application discloses a defogging and binocular stereoscopic vision positioning method for a sea-sky background image, belonging to the field of computer vision. On the basis of a dark channel defogging model, the sky area and non-sky area of an image captured by a binocular camera are segmented according to the characteristics of the sea-sky background image. A quartering (quadtree) method determines the final region for estimating the atmospheric light value, avoiding a single pixel value that is easily influenced by external random conditions; the average of all pixels in the selected final region serves as the atmospheric light value of the optimized model. Superpixel segmentation then yields regions of similar depth of field and fog concentration; a cost function measuring the image's contrast and information loss is constructed, the minimum of the cost function in each region is taken as that region's transmittance estimate, and the estimate is refined by guided filtering to obtain the transmittance of the optimized model.
Description
Technical Field
The application relates to the field of computer vision, in particular to a defogging and binocular stereoscopic vision positioning method for a sea and sky background image.
Background
Image defogging is an indispensable technology in remote monitoring and computer vision. In severe weather such as heavy rain and haze, the images acquired by a vision system suffer attenuation or loss of feature information. Compared with land, the offshore environment is complex and changeable, and sea fog is a common weather condition; the vision system of an offshore unmanned vehicle is therefore affected by sea fog when acquiring images and video, producing reduced contrast, color distortion and blurring. This seriously degrades the accuracy of the disparity map obtained by object recognition and stereo matching, and in turn the positioning accuracy. Moreover, during long-distance observation the target often lies exactly at the sea-sky boundary, and the unique characteristics of maritime images further affect the defogging result.
The dark channel prior proposed by He et al. is a defogging method based on image restoration; the authors demonstrated through extensive experiments that its defogging effect on foggy images is very good, and it has become an important research direction among restoration-based defogging methods. However, when the dark channel defogging algorithm is applied to the sky area, halo-like transition regions and color cast appear, so its defogging effect there is not ideal. To address this problem, several improvements on He's dark channel defogging algorithm have been developed; most of them improve sky-area defogging by weakening the processing of the sky area, but this also weakens the region where sea and sky join and loses the detail information of that region.
Disclosure of Invention
To address the problems in the prior art, the application discloses a defogging and binocular stereoscopic vision positioning method for a sea-sky background image, which comprises the following steps:
s1, acquiring an original foggy image by using a binocular camera;
s2, dividing a sky area and a non-sky area in an original foggy image, and optimizing the atmospheric light value and the transmissivity value on the basis of a dark channel defogging model to obtain an improved dark channel defogging model;
s3, adopting an improved dark channel defogging model to defog an original foggy image to obtain a defogged image;
s4, performing stereo matching on the defogged left image and the defogged right image to obtain a parallax image;
and S5, obtaining the distance between the binocular camera and the target object according to the relation between the parallax image and the depth, and realizing target positioning.
Further, the method for selecting the atmospheric light value comprises the following steps:
s2-1, dividing a sky area of an original foggy image by adopting a quartering method;
s2-2, taking the difference between the mean and the standard deviation of the pixels in each region as the measurement standard, and selecting the region with the largest difference among the divided sky regions;
s2-3, further dividing the region with the largest difference and again selecting the sub-region with the largest difference, repeating until the finally divided region is smaller than a preset threshold; this region is taken as the final region for estimating the atmospheric light value;
s2-4, selecting the pixel average value of the area as the predicted value of the optimal atmospheric light value.
Further, the estimated value of the optimal atmospheric light value is:
A = S(x) / n
where n is the number of pixels in the final region and S(x) is the sum of all pixel values in the region.
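The quartering (quadtree) search of steps S2-1 through S2-4 can be sketched as follows. This is an illustrative implementation under the assumption of a grayscale input, not the patented code; `min_size` plays the role of the preset threshold:

```python
import numpy as np

def estimate_atmospheric_light(gray, min_size=32):
    """Quadtree search for the atmospheric light value A.

    Recursively keeps the quadrant whose (mean - std) score is
    largest -- i.e. bright AND smooth -- until the region shrinks
    below min_size, then returns A = S(x) / n over that region.
    """
    region = gray.astype(np.float64)
    while min(region.shape) > min_size:
        h, w = region.shape
        quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                 region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        region = max(quads, key=lambda r: r.mean() - r.std())
    return region.mean()
```

In practice the search would be restricted to the segmented sky area rather than the whole image.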
Further, the step of optimizing the transmittance is as follows:
s3-1, dividing an original foggy image into pixel blocks with similar textures, colors and brightness by utilizing SLIC super-pixel division, wherein each pixel block has similar depth of field and fog concentration;
s3-2: and processing the blocking effect by adopting a guide filtering mode to obtain the refined transmissivity.
Further, the guided filtering model is as follows:
q_i = a_k I_i + b_k, for all i ∈ ω_k
where ω_k is a neighborhood of the guidance image I centered on pixel k; (a_k, b_k) are constants within the neighborhood ω_k; q_i is the output image.
Owing to the above technical scheme, the defogging and binocular stereoscopic vision positioning method for the sea-sky background image provided by the application adopts an improved version of He Kaiming's dark channel defogging algorithm, reducing the interference of sea fog weather with the vision system. Combining the binocular vision system with the defogging model provides clearer images with richer detail information in sea fog weather and improves the accuracy of the subsequent disparity map. Binocular stereoscopic vision positioning allows the distance between the unmanned ship and an obstacle to be predicted in advance, improving sailing safety.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings may be obtained according to the drawings without inventive effort to those skilled in the art.
FIG. 1 is a general flow chart of the present application;
FIG. 2 is a defogging flow chart according to the present application;
FIG. 3 (a) is the original image of the transmittance demonstration of the present application;
FIG. 3 (b) is a rough estimate of the transmittance of the present application;
FIG. 3 (c) is a transmittance optimization graph of the present application;
FIG. 4 is a flow chart of disparity map acquisition according to the present application;
FIG. 5 is a disparity map and depth map according to the present application;
fig. 6 (a) is an original of the picture a;
fig. 6 (b) is an effect diagram of the He algorithm used for picture a;
FIG. 6 (c) is an effect diagram of the method employed by panel a;
fig. 6 (d) is the original of picture b;
fig. 6 (e) is an effect diagram of the He algorithm used for picture b;
fig. 6 (f) is an effect diagram of the present method.
Detailed Description
In order to make the technical scheme and advantages of the present application clearer, the technical scheme in the embodiments of the present application is described clearly and completely below with reference to the accompanying drawings:
the sea-sky background image is characterized in that: when the unmanned ship performs long-distance observation, the sky and the sea area at a distance are connected into a line, so that an observed target is positioned at the junction of the sea antennae.
FIG. 1 is a general flow chart of the present application; a defogging and binocular stereoscopic vision positioning method for a sea-sky background image comprises the following steps:
s1, acquiring an original foggy image by using a binocular camera;
s2, dividing a sky area and a non-sky area in an original foggy image, optimizing the atmospheric light value and the transmissivity value on the basis of a dark channel defogging model proposed by He Kaiming to obtain an improved dark channel model, wherein FIG. 2 is a defogging flow chart of the application;
(1) Dividing a sky area and a non-sky area in an original foggy image;
according to prior knowledge, when an unmanned surface ship shoots at long distance, the sky area necessarily lies in the upper part of the image, its brightness is obviously higher than that of the non-sky area, and the brightness differences within the sky area are small, so the area appears relatively smooth. In addition, ships at sea and scenery on shore have rich texture, so a connected region with higher brightness and gentler gradient change can be marked as the sky area. The segmentation of the sky and non-sky areas comprises the following steps: first, the RGB color images shot by the binocular camera are converted to grayscale and denoised to improve image quality; second, edge information is extracted with the Canny edge detection algorithm; then a dilation-then-erosion (closing) operation is applied to the edge image to enhance the edge information; finally, the result is inverted to obtain the sky and non-sky areas.
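As an illustration of this segmentation idea, a rough numpy stand-in might look as follows; the thresholds are hypothetical, and gradient magnitude replaces the Canny-plus-morphology chain for brevity:

```python
import numpy as np

def segment_sky(gray, grad_thresh=10.0, bright_thresh=150.0):
    """Mark bright, smooth pixels as sky.

    Approximates the described pipeline: the sky area is brighter
    than the non-sky area and its gradients are gentle, while ships
    and shore scenery are textured (high gradient).
    """
    gy, gx = np.gradient(gray.astype(np.float64))
    grad = np.hypot(gx, gy)                      # gradient magnitude
    return (grad < grad_thresh) & (gray > bright_thresh)
```

A full implementation would follow the patent's actual chain (Canny, dilation, erosion, inversion), e.g. with OpenCV's `Canny`, `dilate` and `erode`.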
(2) A dark channel defogging model;
in the field of computer vision, a widely used image fogging model is shown in the following formula:
I(x)=J(x)t(x)+A(1-t(x)) (1)
wherein x is the spatial coordinates of the image pixel; i (x) is an original hazy image shot by a binocular camera; j (x) is the haze-free image to be restored; a is global atmospheric illumination intensity; t (x) is the transmittance of the medium at the image pixel coordinate x.
Rearranging formula (1) gives:
J(x) = (I(x) − A) / t(x) + A (2)
as equation (2) shows, the goal of a restoration-based defogging algorithm is to estimate the atmospheric light value A and the transmittance t(x) from the original foggy image I(x), and then recover the defogged image J(x) through the image fogging model. The dark channel prior holds that, in most local areas other than the sky, some pixels always have a very low value in at least one color channel; that is, the minimum light intensity in the area is very small and even tends to 0:
J_dark(x) = min_{y∈Ω(x)} ( min_{c∈{r,g,b}} J^c(y) ) (3)
J_dark(x) → 0 (4)
in formula (3), J^c denotes a single color channel of J; Ω(x) denotes a window centered on pixel x.
Combining formulas (3) and (4) gives:
min_{y∈Ω(x)} ( min_c J^c(y) ) → 0 (5)
since the atmospheric light value A ≠ 0, it follows that:
min_{y∈Ω(x)} ( min_c ( J^c(y) / A ) ) → 0 (6)
dividing both sides of formula (1) by A, taking the minimum on both sides, and combining with formula (6) yields the estimated transmittance:
t(x) = 1 − min_{y∈Ω(x)} ( min_c ( I^c(y) / A ) ) (7)
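Formulas (3) and (7) can be sketched directly in numpy. The sketch below is illustrative, not the patented code; `omega` is the haze-retention factor (0.95) standard in He's method, which the derivation above omits:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Formula (3): min over color channels, then min over a
    patch x patch window Ω(x) (edge-padded)."""
    mins = img.min(axis=2)                 # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, A, patch=15, omega=0.95):
    """Formula (7): t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)
```

For a fog-free region the dark channel tends to 0 and t(x) tends to 1; for a region as bright as the atmospheric light, t(x) drops toward 1 − omega.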
(3) Optimizing an atmospheric light value;
according to the physical meaning of the atmospheric light value, existing dark channel defogging models usually take the pixel value of the brightest point in the image as its approximate estimate, but the error between this estimate and the actual value is unavoidable, and a single value is easily influenced by external random conditions. In an image containing a sky region, the atmospheric light value in the sky region is closest to the actual value. Therefore the sky region of the original foggy image is divided by a quartering method, the difference between the mean and the standard deviation of the pixels in each region is used as the measurement standard, and the region with the largest difference is selected. That region is further divided and the sub-region with the largest difference is selected again, repeating until the finally divided region is smaller than a preset threshold. This region is taken as the final region for estimating the atmospheric light value, and its pixel average is selected as the estimate of the optimal atmospheric light value, namely:
A = S(x) / n (8)
In formula (8), n is the number of pixels in the region, and S(x) is the sum of all pixel values in the region.
(4) Optimizing the transmissivity;
extensive prior experiments show that the contrast of a haze-free image is higher than that of the same scene with haze. Based on this property and existing transmittance estimation methods, a cost function measuring contrast and information loss is designed, and the minimum of the cost function is computed as a candidate for the optimized transmittance.
The application first uses SLIC superpixel segmentation to divide the original foggy image into pixel blocks with similar texture, color and brightness; such blocks have similar depth of field and fog concentration, so no abrupt depth-of-field change occurs inside each small block. However, this assumes the transmittance within each block is constant, so the estimate exhibits blockiness and some deviation. The application therefore adopts guided filtering to obtain a finer transmittance. The guided filtering output is linearly related to the guidance image, namely:
q_i = a_k I_i + b_k, for all i ∈ ω_k (9)
where ω_k is a neighborhood of the guidance image I centered on pixel k, and (a_k, b_k) are constants within ω_k. Guided filtering seeks the optimal coefficients (a_k, b_k) that minimize the mean square error between the output image q_i and the input image to be filtered. FIG. 3 (a) is the original image of the transmittance demonstration of the present application; FIG. 3 (b) is the rough transmittance estimate; FIG. 3 (c) is the optimized transmittance.
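A self-contained sketch of the guided filter refinement follows (SLIC itself is available in, e.g., `skimage.segmentation.slic`). The box means use a naive sliding window for clarity rather than the usual O(1) integral-image trick:

```python
import numpy as np

def box_mean(a, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded."""
    k = 2 * r + 1
    pad = np.pad(a, r, mode='edge')
    out = np.empty(a.shape, dtype=np.float64)
    h, w = a.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """q_i = a_k * I_i + b_k with (a_k, b_k) minimizing the MSE
    between output and input p within each window omega_k."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)     # eps regularizes flat regions
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

Here I would be the grayscale hazy input used as guidance and p the blocky per-superpixel transmittance map to be refined.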
S3, adopting an improved dark channel defogging model to defog an original foggy image to obtain a defogged image;
s4, performing stereo matching on the defogged left image and the defogged right image to obtain a parallax image;
FIG. 4 is a flow chart of disparity map acquisition according to the present application. The disparity map acquisition process of binocular stereoscopic vision comprises image acquisition, camera calibration, image correction, stereo matching and positioning. Calibration of the camera parameters is a critical link in machine vision applications: on one hand, the accuracy of the calibration result greatly influences the disparity map obtained by stereo matching, and hence the positioning precision of the target object; on the other hand, calibration is used to solve the camera's distortion coefficients during imaging and to correct the image. The model that recovers the three-dimensional information of the target object from pixel coordinates using the calibrated intrinsic and extrinsic parameters is:
s · [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] · [R | T] · [X, Y, Z, 1]^T (10)
In formula (10), d_x and d_y are the physical sizes of each pixel on the camera chip along the horizontal and vertical axes; u_0 and v_0 are the coordinates of the image-plane center; f is the focal length of the left and right cameras; R is the 3×3 camera rotation matrix; T is the 3×1 camera translation matrix; and f_x = f/d_x, f_y = f/d_y.
In stereoscopic vision, the positional difference of the left and right cameras makes the two acquired images non-coplanar, which increases the difficulty of stereo matching. Correction processes the non-coplanar left and right images obtained by the binocular camera into coplanar, row-aligned images. Coplanar row alignment means: the imaging planes of the two cameras of the binocular system lie in the same plane, and when any spatial point is projected onto the two imaging planes, the two projection points lie on the same row.
Recovering the distance information of the target from two-dimensional images is the most important part of stereoscopic vision. Stereo matching uses certain features of the object to find correspondences between the left and right images acquired by the camera, obtaining the disparity — the pixel distance of the same object between the left and right images — and hence the disparity map; finally, the distance information of the target object is recovered from the camera's imaging model and its intrinsic and extrinsic parameters.
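As an illustration of the matching principle (production systems typically use semi-global matching, e.g. OpenCV's `StereoSGBM`), a brute-force SAD block matcher over rectified, row-aligned grayscale images might look like:

```python
import numpy as np

def block_match(left, right, max_disp=16, block=5):
    """Disparity d = x_l - x_r per pixel, found by minimizing the
    sum of absolute differences (SAD) along the same image row."""
    h, w = left.shape
    pad = block // 2
    L = np.pad(left.astype(np.float64), pad, mode='edge')
    R = np.pad(right.astype(np.float64), pad, mode='edge')
    disp = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = L[y:y + block, x:x + block]  # window centered at (y, x)
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):
                sad = np.abs(patch - R[y:y + block, x - d:x - d + block]).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

This brute-force form is O(h · w · max_disp) and only for exposition; real systems add cost aggregation, sub-pixel refinement and consistency checks.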
And S5, obtaining the distance between the binocular camera and the target object according to the relation between the parallax image and the depth, and realizing target positioning.
FIG. 5 shows the relationship between the disparity map and depth. O_l and O_r are the principal points of the left and right camera imaging planes, with coordinates (c_x, c_y) and (c'_x, c'_y). The imaging points of the target point P(X, Y, Z) on the left and right imaging planes are P_l and P_r respectively, with coordinates (x_l, y_l) and (x_r, y_r) in the image physical coordinate system. The disparity of the target point is d = x_l − x_r; b is the baseline distance between the camera optical centers; f is the focal length of the two imaging planes.
According to the binocular camera imaging principle and the similar-triangle principle, the relation between the disparity map and depth is obtained as shown in the following formula:
d = x_l − x_r = f · b / Z (11)
the deduction can be obtained:
in order to illustrate the effectiveness and feasibility of the method, the algorithm of the application is compared with the He algorithm; the resulting defogging comparison is shown in FIG. 6. FIG. 6 (a) is the original of picture a; FIG. 6 (b) is the result of the He algorithm on picture a; FIG. 6 (c) is the result of the present method on picture a; FIG. 6 (d) is the original of picture b; FIG. 6 (e) is the result of the He algorithm on picture b; FIG. 6 (f) is the result of the present method on picture b.
In the simulation, picture a in FIG. 6 (a) has a pixel size of 690 × 517 and picture b in FIG. 6 (d) has a pixel size of 550 × 362. The experimental results show that the He algorithm produces distortion when applied to the sky area, making the contrast of the sky area too high.
As shown in Table 1, the application qualitatively analyzes the defogged image quality using evaluation indexes such as information entropy, structural similarity (SSIM) and mean square error (MSE). Information entropy represents the average amount of information contained in the image: the larger the entropy, the more information and detail the image carries. The mean square error measures the ability to retain effective information before and after processing: the smaller the value, the stronger the information retention of the processed image. Structural similarity measures the similarity between the restored image and the original: the larger the SSIM value, the higher the similarity of the two images.
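Information entropy and MSE are straightforward to compute (SSIM is available as `skimage.metrics.structural_similarity`); illustrative sketches for 8-bit grayscale images:

```python
import numpy as np

def entropy(gray):
    """Shannon entropy of the 8-bit histogram: average information per pixel."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def mse(a, b):
    """Mean square error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))
```

A uniform image has entropy 0 bits; an image split evenly between two gray levels has entropy 1 bit.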
TABLE 1
The foregoing is only a preferred embodiment of the present application, but the scope of protection of the present application is not limited thereto. Any equivalent substitution or modification that a person skilled in the art could readily conceive within the technical scope disclosed by the present application, according to its technical scheme and inventive concept, should be covered by the scope of protection of the present application.
Claims (1)
1. A defogging and binocular stereoscopic vision positioning method for a sea-sky background image is characterized by comprising the following steps of: the method comprises the following steps:
s1, acquiring an original foggy image by using a binocular camera;
s2, dividing a sky area and a non-sky area in an original foggy image, and optimizing the atmospheric light value and the transmissivity value on the basis of a dark channel defogging model to obtain an improved dark channel defogging model;
s3, adopting an improved dark channel defogging model to defog an original foggy image to obtain a defogged image;
s4, performing stereo matching on the defogged left image and the defogged right image to obtain a parallax image;
s5, obtaining the distance between the binocular camera and the target object according to the relation between the parallax image and the depth, and realizing target positioning;
the method for selecting the atmospheric light value comprises the following steps:
s2-1, dividing a sky area of an original foggy image by adopting a quartering method;
s2-2, taking the difference between the mean and the standard deviation of the pixels in each region as the measurement standard, and selecting the region with the largest difference among the divided sky regions;
s2-3, further dividing the region with the largest difference and again selecting the sub-region with the largest difference, repeating until the finally divided region is smaller than a preset threshold; this region is taken as the final region for estimating the atmospheric light value;
s2-4, selecting a pixel average value of the area as a predicted value of an optimal atmospheric light value;
the formula of the estimated value of the optimal atmospheric light value is:
A = S(x) / n
where n is the number of pixels in the region, and S(x) is the sum of all pixel values in the region;
the transmittance optimization step is as follows:
s3-1, dividing an original foggy image into pixel blocks with similar textures, colors and brightness by utilizing SLIC super-pixel division, wherein each pixel block has similar depth of field and fog concentration;
s3-2: processing the blocking effect by adopting a guide filtering mode to obtain refined transmissivity;
the guided filtering model is as follows:
q_i = a_k I_i + b_k, for all i ∈ ω_k
where ω_k is a neighborhood of the guidance image I centered on pixel k; (a_k, b_k) are constants within the neighborhood ω_k; q_i is the output image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010359252.9A CN111553862B (en) | 2020-04-29 | 2020-04-29 | Defogging and binocular stereoscopic vision positioning method for sea and sky background image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010359252.9A CN111553862B (en) | 2020-04-29 | 2020-04-29 | Defogging and binocular stereoscopic vision positioning method for sea and sky background image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111553862A CN111553862A (en) | 2020-08-18 |
CN111553862B true CN111553862B (en) | 2023-10-13 |
Family
ID=72007964
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010359252.9A Active CN111553862B (en) | 2020-04-29 | 2020-04-29 | Defogging and binocular stereoscopic vision positioning method for sea and sky background image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111553862B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112188059B (en) * | 2020-09-30 | 2022-07-15 | 深圳市商汤科技有限公司 | Wearable device, intelligent guiding method and device and guiding system |
CN113379619B (en) * | 2021-05-12 | 2022-02-01 | 电子科技大学 | Integrated processing method for defogging imaging, visibility extraction and depth of field estimation |
CN113487516B (en) * | 2021-07-26 | 2022-09-06 | 河南师范大学 | Defogging processing method for image data |
CN114332682B (en) * | 2021-12-10 | 2024-06-04 | 青岛杰瑞工控技术有限公司 | Marine panorama defogging target identification method |
CN115100408B (en) * | 2022-06-22 | 2024-09-20 | 上海大学 | Construction method of sea area scene hazy image dataset |
CN116523801B (en) * | 2023-07-03 | 2023-08-25 | 贵州医科大学附属医院 | Intelligent monitoring method for nursing premature infants |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104794688A (en) * | 2015-03-12 | 2015-07-22 | 北京航空航天大学 | Single image defogging method and device based on depth information separation sky region |
CN108876743A (en) * | 2018-06-26 | 2018-11-23 | 中山大学 | A kind of image rapid defogging method, system, terminal and storage medium |
CN110207650A (en) * | 2019-05-31 | 2019-09-06 | 重庆迪星天科技有限公司 | Automobile-used highway height-limiting frame height measurement method and device based on binocular vision |
CN110211067A (en) * | 2019-05-27 | 2019-09-06 | 哈尔滨工程大学 | One kind being used for UUV Layer Near The Sea Surface visible images defogging method |
Non-Patent Citations (3)
Title |
---|
ZHAO Hong et al., "Algorithm for Coding Unit Partition in 3D Animation Using High Efficiency Video Coding Based on Canny Operator Segment", Journal of Digital Information Management, 2016. *
SU Li et al., "An Improved Dehazing Algorithm for Panoramic Sea-Fog Images", Computer Simulation, Vol. 33, No. 11, 2016. *
GUO Qingshan et al., "Image Dehazing Algorithm Based on DehazeNet and Edge-Detection Mean Guided Filtering", Transducer and Microsystem Technologies, Vol. 39, No. 1, 2020. *
Also Published As
Publication number | Publication date |
---|---|
CN111553862A (en) | 2020-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111553862B (en) | Defogging and binocular stereoscopic vision positioning method for sea and sky background image | |
CN108682026B (en) | Binocular vision stereo matching method based on multi-matching element fusion | |
CN108596849B (en) | Single image defogging method based on sky region segmentation | |
CN105374019B (en) | Multiple depth map fusion method and device | |
Tripathi et al. | Single image fog removal using bilateral filter | |
CN107133927B (en) | Single image defogging method based on mean-square-error dark channel under a superpixel framework | |
CN110232666A (en) | Fast defogging method for underground pipeline images based on dark channel prior | |
CN109118446B (en) | Underwater image restoration and denoising method | |
CN104794697B (en) | Image defogging method based on dark channel prior | |
CN107622480B (en) | Kinect depth image enhancement method | |
WO2013018101A1 (en) | Method and system for removal of fog, mist or haze from images and videos | |
CN111738941B (en) | Underwater image optimization method integrating light field and polarization information | |
CN107527325B (en) | Monocular underwater vision enhancement method based on dark channel prior | |
CN107705258B (en) | Underwater image enhancement method based on three-primary-color combined pre-equalization and deblurring | |
CN110689490A (en) | Underwater image restoration method based on texture color features and optimized transmittance | |
CN111833258A (en) | Image color correction method based on double-transmittance underwater imaging model | |
CN103226816A (en) | Haze image medium transmittance estimation and optimization method based on fast Gaussian filtering | |
Wang et al. | Single-image dehazing using color attenuation prior based on haze-lines | |
CN106504216B (en) | Single image defogging method based on a variational model | |
CN113379619B (en) | Integrated processing method for defogging imaging, visibility extraction and depth of field estimation | |
CN111091501A (en) | Parameter estimation method of atmosphere scattering defogging model | |
Lai et al. | Single image dehazing with optimal transmission map | |
CN116757949A (en) | Atmosphere-ocean scattering environment degradation image restoration method and system | |
KR101923581B1 (en) | Normal vector extraction apparatus and method thereof based on stereo vision for hull underwater inspection using underwater robot | |
CN115439349A (en) | Underwater SLAM optimization method based on image enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||