CN111553862A - Sea-sky background image defogging and binocular stereo vision positioning method - Google Patents


Info

Publication number
CN111553862A
CN111553862A (application CN202010359252.9A)
Authority
CN
China
Prior art keywords
image
area
defogging
value
sky
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010359252.9A
Other languages
Chinese (zh)
Other versions
CN111553862B (en)
Inventor
赵红
李春艳
陈廷凯
陈浩华
田嘉禾
王荣峰
贾玉森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202010359252.9A priority Critical patent/CN111553862B/en
Publication of CN111553862A publication Critical patent/CN111553862A/en
Application granted granted Critical
Publication of CN111553862B publication Critical patent/CN111553862B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/73 — Deblurring; Sharpening
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30181 — Earth observation
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a sea-sky background image defogging and binocular stereo vision positioning method, belonging to the field of computer vision. On the basis of the dark channel defogging model, the sky region and the non-sky region of an image captured by a binocular camera are segmented according to the characteristics of sea-sky background images. A quartering method determines the final region for estimating the atmospheric light value, avoiding the sensitivity of a single-pixel value to random external conditions, and the mean of all pixels in the selected final region serves as the atmospheric light value of the optimized model. Superpixel segmentation then yields regions with similar depth of field and fog concentration; a cost function weighing image contrast against information loss is constructed, the minimum of the cost function in each region is taken as that region's transmittance estimate, and guided filtering refines this estimate to obtain the transmittance of the optimized model.

Description

Sea-sky background image defogging and binocular stereo vision positioning method
Technical Field
The invention relates to the field of computer vision, and in particular to a sea-sky background image defogging and binocular stereo vision positioning method.
Background
Image defogging is an indispensable technology in remote monitoring and computer vision. Under severe weather conditions such as heavy rain and haze, images acquired by a vision system easily suffer attenuation or loss of feature information. Compared with land, the marine environment is complex and changeable, and sea fog is common weather; the vision system of a marine unmanned vehicle is therefore affected by sea fog when acquiring image information and video, producing reduced contrast, color distortion, blurring, and similar phenomena that seriously degrade the accuracy of object recognition and of the disparity map obtained by stereo matching, and in turn the positioning accuracy. Moreover, when the unmanned vehicle observes at long range, the target object lies exactly in the sea-sky junction area, and this unique characteristic of maritime images affects the defogging result.
He et al proposed a dark channel theory which is a defogging method based on image restoration, and authors have adopted a lot of experiments to prove that the method is ideal for defogging of a foggy image, and becomes an important research direction of the image restoration defogging method. However, the dark channel defogging algorithm generates a transition region and a color cast phenomenon when acting on the sky region, so that the defogging effect of the sky region is not ideal. Aiming at the problem, improved methods based on He dark channel defogging algorithms emerge, and the improved algorithms mostly achieve the effect of improving the defogging effect of the sky area by weakening the treatment on the sky area, but the method can also weaken the area connected with the sky and lose the detail information of the area.
Disclosure of Invention
To address the problems in the prior art, the invention discloses a sea-sky background image defogging and binocular stereo vision positioning method, which comprises the following steps:
s1, acquiring an original foggy image by using a binocular camera;
s2, segmenting a sky region and a non-sky region in the original foggy image, and optimizing the values of atmospheric light values and transmissivity on the basis of a dark channel defogging model to obtain an improved dark channel defogging model;
s3, carrying out defogging treatment on the original foggy image by adopting an improved dark channel defogging model to obtain a defogged image;
s4, carrying out stereo matching on the defogged left image and the defogged right image to obtain a disparity map;
and S5, obtaining the distance between the binocular camera and the target object according to the relation between the parallax map and the depth, and realizing target positioning.
Further, the atmospheric light value is obtained by the following steps:
s2-1, dividing the sky area of the original foggy image by a quartering method;
s2-2, taking the difference between the average value of the pixels in the area and the standard deviation of the pixels as a measurement standard, and selecting the area with the largest difference in the divided sky area;
s2-3, further dividing the area with the largest difference, selecting the area with the largest difference again until the finally divided area is smaller than a preset threshold value, and taking the area as a final area of the pre-estimation of the atmospheric light value;
and S2-4, selecting the average value of the pixels of the area as the estimated value of the optimal atmosphere light value.
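The quartering search of steps S2-1 to S2-4 can be sketched as follows. This is a minimal NumPy illustration; the function name, the `min_size` stopping threshold, and the use of a grayscale crop of the sky region are our assumptions, not values specified by the patent:

```python
import numpy as np

def estimate_atmospheric_light(sky, min_size=32):
    """Recursively quarter a grayscale sky region, keeping the quadrant
    that maximizes (pixel mean - pixel std), i.e. the brightest and
    smoothest quadrant, until the region is smaller than min_size on a
    side. The mean of the surviving region is the estimate of A."""
    region = np.asarray(sky, dtype=float)
    while min(region.shape) > min_size:
        h, w = region.shape
        quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                 region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        # mean - std rewards bright, low-variance quadrants
        region = max(quads, key=lambda q: q.mean() - q.std())
    return region.mean()
```

Averaging over the surviving region, rather than taking a single brightest pixel, is what makes the estimate robust to isolated bright outliers.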
Further, the formula of the estimated value of the optimal atmospheric light value is as follows:
A = S(x) / n
where n is the number of pixels in the region and S (x) is the sum of all pixel values in the region.
Further, the transmittance optimization step is as follows:
s3-1, segmenting the original foggy image into pixel blocks with similar texture, color and brightness by SLIC superpixel segmentation, wherein each pixel block has similar depth of field and fog concentration;
s3-2: and processing the blocking effect by adopting a guided filtering mode to obtain refined transmittance.
Further, the model of the guided filtering is as follows:
q_i = a_k · I_i + b_k, ∀ i ∈ ω_k
where ω_k is a window of the guide image I centered on pixel k; (a_k, b_k) are constants within ω_k; q_i is the output image.
Due to the adoption of the above technical scheme, the sea-sky background image defogging and binocular stereo vision positioning method provided by the invention adopts an improved version of He's dark channel defogging algorithm, reducing the interference of sea fog weather with the vision system. Combining the binocular vision system with the defogging model provides clearer images with richer detail information in sea fog weather and improves the precision of the subsequently obtained disparity map. The binocular stereo vision positioning technology allows the distance between the unmanned ship and an obstacle to be predicted in advance, further improving navigation safety.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a flow chart of the defogging process of the present invention;
FIG. 3(a) is a graph original of a transmittance demo picture according to the present invention;
FIG. 3(b) is a graph of a rough estimate of transmissivity in accordance with the present invention;
FIG. 3(c) is a graph of transmittance optimization for the present invention;
FIG. 4 is a flow chart of disparity map acquisition according to the present invention;
FIG. 5 is a graph of the relationship between the disparity map and depth according to the present invention;
FIG. 6(a) is an original drawing of picture a;
FIG. 6(b) is a He algorithm effect diagram adopted by the picture a;
FIG. 6(c) is a diagram illustrating the effect of the method adopted by the picture a;
FIG. 6(d) is an original drawing of picture b;
FIG. 6(e) is a He algorithm effect diagram adopted by the picture b;
fig. 6(f) is an effect diagram of the method adopted in the picture b.
Detailed Description
In order to make the technical solutions and advantages of the present invention clearer, the following describes the technical solutions in the embodiments of the present invention clearly and completely with reference to the drawings in the embodiments of the present invention:
the sea-sky background image is characterized in that: when the unmanned ship carries out remote observation, the remote sky and the sea area are connected into a line, so that an observed target is positioned at the junction of the sea-sky and the sea-sky.
FIG. 1 is a general flow diagram of the present invention; a method for defogging and binocular stereo vision positioning of a sea and sky background image comprises the following steps:
s1, acquiring an original foggy image by using a binocular camera;
s2, segmenting the sky region and the non-sky region in the original foggy image, and optimizing the values of the atmospheric light value and the transmittance on the basis of the dark channel defogging model proposed by He et al. to obtain an improved dark channel model, wherein FIG. 2 is the defogging flow chart of the invention;
(1) segmenting a sky region and a non-sky region in the original foggy image;
according to prior knowledge, when the unmanned surface vessel shoots at long range, a sky region exists in the upper part of the image; its brightness is obviously higher than that of the non-sky region, the differences between brightness values within it are small, and the region is relatively smooth. In addition, the texture information of ships at sea or scenery on shore is rich, so a connected region with high brightness and gentle gradient change can be marked as the sky region. The method for segmenting the sky and non-sky regions of the image is as follows: first, the RGB color image captured by the binocular camera is converted to a grayscale image and denoised to improve image quality; second, edge information is extracted with the Canny edge detection algorithm; then, dilation followed by erosion (a morphological closing) is applied to the edge image to enhance its edge information; finally, the resulting image is inverted to obtain the sky region and the non-sky region.
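As a rough illustration of the segmentation idea above (bright, smooth regions are sky), the following NumPy sketch replaces the Canny/dilation/erosion pipeline with a simple gradient threshold; the function name and both thresholds are ours and purely illustrative, not the patent's procedure:

```python
import numpy as np

def segment_sky(gray, bright_thresh=0.6, grad_thresh=0.05):
    """Mark bright, low-gradient pixels of a grayscale image in [0, 1]
    as sky -- a crude stand-in for the edge-based pipeline above."""
    gy, gx = np.gradient(gray.astype(float))   # per-axis finite differences
    grad = np.hypot(gx, gy)                    # gradient magnitude
    return (gray > bright_thresh) & (grad < grad_thresh)
```

A production implementation would follow the patent's pipeline (Canny edges, closing, inversion) rather than this threshold heuristic.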
(2) A dark channel defogging model;
in the field of computer vision, a widely used image fogging model is shown as follows:
I(x)=J(x)t(x)+A(1-t(x)) (1)
wherein x is the spatial coordinate of an image pixel; i (x) is an original foggy image captured by a binocular camera; j (x) is a fog-free image to be restored; a is the global atmospheric illumination intensity; t (x) is the transmission of the medium at image pixel coordinate x.
Transforming equation (1) can result in:
J(x) = (I(x) − A) / t(x) + A (2)
According to equation (2), a restoration-based defogging algorithm aims to estimate the atmospheric light value A and the transmittance t(x) from the original foggy image I(x) and then obtain the defogged image J(x) from the fogging model. The dark channel prior holds that in most non-sky local regions, some pixels always have a very low value in at least one color channel; that is, the minimum light intensity in such a region is very small and even tends to 0:
J^dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J^c(y) ) (3)
J^dark(x) → 0 (4)
In equation (3), J^c denotes one color channel of J; Ω(x) denotes a window centered on pixel x.
Combining equations (3) and (4) gives:
min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J^c(y) ) → 0 (5)
Since the atmospheric light value A ≠ 0, it can be obtained that:
min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J^c(y) / A^c ) → 0 (6)
Dividing both sides of equation (1) by A, taking the minimum of both sides, and combining with equation (6) yields the transmittance estimate:
t̃(x) = 1 − min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} I^c(y) / A^c ) (7)
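Equations (3) and (7) can be sketched directly. This is a naive NumPy version; the patch size and function names are ours, and the optional haze-retention factor `omega` used by He et al. (0.95 in their paper) defaults to 1.0 here so the code matches equation (7) exactly:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Eq. (3): per-pixel minimum over color channels, then minimum
    over a local patch Omega(x) of side `patch` (edge-padded)."""
    mins = img.min(axis=2).astype(float)
    h, w = mins.shape
    padded = np.pad(mins, patch // 2, mode='edge')
    out = np.full_like(mins, np.inf)
    for dy in range(patch):               # sliding-window minimum
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def transmission(img, A, patch=7, omega=1.0):
    """Eq. (7): t(x) = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)
```

A fog-free dark scene gives transmittance near 1, while a uniformly bright (fully hazy) scene gives transmittance near 0, as the prior predicts.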
(3) optimizing an atmospheric light value;
According to the physical meaning of the atmospheric light value, existing dark channel defogging models usually select the pixel value of the brightest point in the image as an approximate estimate. However, error between this estimate and the actual value is inevitable, and a single value is easily influenced by random external conditions. In an image containing a sky region, the value closest to the actual atmospheric light lies in the sky region. The invention therefore divides the sky region of the original foggy image by a quartering method, using the difference between the pixel mean and the pixel standard deviation of each sub-region as the criterion, and selects the sub-region with the largest difference. The selected sub-region is divided and selected again in the same way, and this repeats until the resulting region is smaller than a preset threshold. This final region is used for estimating the atmospheric light value, and its pixel mean is taken as the estimate of the optimal atmospheric light value, namely:
A = S(x) / n (8)
In equation (8), n is the number of pixels in the region, and S(x) is the sum of all pixel values in the region.
(4) Optimizing the transmittance;
Extensive experiments show that the contrast of a fog-free image is higher than that of the corresponding foggy image. Based on this property and in combination with existing transmittance estimation methods, a cost function is designed, and its minimum is computed as the candidate value of the optimized transmittance.
The method first uses SLIC superpixel segmentation to divide the original foggy image into pixel blocks with similar texture, color, and brightness. These blocks have similar depth of field and fog concentration, so no abrupt depth change occurs inside a block; however, the method assumes the transmittance is constant within each block, so the estimate exhibits a blocking effect and a certain deviation. Guided filtering is therefore applied to obtain a finer transmittance, which is used as the optimized transmittance. The guided filter output is linearly related to the guide image, namely:
q_i = a_k · I_i + b_k, ∀ i ∈ ω_k (9)
where ω_k is a window of the guide image I centered on pixel k, and (a_k, b_k) are constants within ω_k. Guided filtering seeks the optimal coefficients (a_k, b_k) from the difference between the image to be filtered and the output image, so as to minimize the mean square error between the output image and the input image and obtain the output image q_i. FIG. 3(a) is an original transmittance demonstration picture of the invention; FIG. 3(b) is the rough transmittance estimate; FIG. 3(c) is the optimized transmittance.
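The guided filtering of equation (9) can be sketched in its standard box-filter formulation. This is a NumPy sketch; the window radius `r` and regularizer `eps` are illustrative choices, not values from the patent:

```python
import numpy as np

def guided_filter(I, p, r=4, eps=1e-3):
    """Guided filter: q_i = mean(a_k) * I_i + mean(b_k), where
    (a_k, b_k) minimize the squared error to p within each window."""
    def box(x):
        # naive (2r+1)x(2r+1) mean filter with edge padding
        pad = np.pad(x, r, mode='edge')
        acc = np.zeros_like(x, dtype=float)
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                acc += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
        return acc / (2 * r + 1) ** 2

    mean_I, mean_p = box(I), box(p)
    var_I = box(I * I) - mean_I * mean_I
    cov_Ip = box(I * p) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # eps regularizes flat regions
    b = mean_p - a * mean_I
    return box(a) * I + box(b)
</n```

Because the output is locally linear in the guide image, edges of the guide survive while the blockiness of the per-superpixel transmittance is smoothed away.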
S3, carrying out defogging treatment on the original foggy image by adopting an improved dark channel defogging model to obtain a defogged image;
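With A and t(x) in hand, step S3 is the inversion of the fogging model in equation (2). A minimal sketch follows; the lower bound `t0`, which guards against division by near-zero transmittance, is a common practical safeguard of ours rather than something stated in the patent:

```python
import numpy as np

def recover(I, A, t, t0=0.1):
    """Eq. (2): J(x) = (I(x) - A) / t(x) + A, with t clamped below by
    t0 so nearly opaque pixels do not amplify noise."""
    t = np.maximum(t, t0)[..., None]   # broadcast t over color channels
    return (I - A) / t + A
```

For any pixel where the true t is at least t0, this exactly inverts the fogging model I(x) = J(x)t(x) + A(1 − t(x)).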
s4, carrying out stereo matching on the defogged left image and the defogged right image to obtain a disparity map;
FIG. 4 is the flow chart of disparity map acquisition of the invention. The disparity map acquisition process of binocular stereo vision comprises image acquisition, camera calibration, image rectification, stereo matching, and positioning. Calibration of the camera parameters is a key link in machine vision applications: the accuracy of the calibration result greatly affects the disparity map obtained by stereo matching, and hence the positioning precision of the target object. Calibration serves, on the one hand, to solve the transformation (the intrinsic and extrinsic parameters) between a three-dimensional point in the world coordinate system and the image coordinate system, used to recover the three-dimensional coordinates of the target; on the other hand, to solve the distortion coefficients of the camera imaging process, used for image rectification. The model recovering the three-dimensional information of the target from its pixel coordinates, using the calibrated intrinsic and extrinsic parameters, is:
Z_c · [u, v, 1]^T = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]] · [R T] · [X_w, Y_w, Z_w, 1]^T (10)
In equation (10), d_x and d_y are the physical sizes of each pixel of the camera chip along the horizontal and vertical axes; u_0 and v_0 are the coordinates of the image plane center; f is the focal length of the left and right cameras; R is the camera rotation matrix, a 3 × 3 matrix; T is the camera translation matrix, a 3 × 1 matrix; and f_x = f/d_x, f_y = f/d_y.
In stereo vision, because the left and right cameras occupy different positions, the left and right images are not coplanar, which increases the difficulty of stereo matching. Rectification processes the non-coplanar left and right images obtained by the binocular camera into coplanar, row-aligned images. Coplanar row alignment means that the imaging planes of the two imaging devices of the binocular system lie in the same plane, and that any spatial point projected onto the two imaging planes yields two projection points on the same row.
Stereo matching aims to recover object distance information from two-dimensional images and is the most important part of stereo vision. It matches certain features of an object between the left and right images acquired by the cameras to obtain the disparity between corresponding pixels of the same object, producing a disparity map; the distance information of the target is finally recovered from the camera imaging model and the intrinsic and extrinsic camera parameters.
And S5, obtaining the distance between the binocular camera and the target object according to the relation between the parallax map and the depth, and realizing target positioning.
FIG. 5 is a graph of the relationship between the disparity map and depth of the invention. O_l and O_r are the principal points of the left and right camera imaging planes, with coordinates (c_x, c_y) and (c'_x, c'_y); the imaging points of the target point P(X, Y, Z) on the left and right imaging planes are P_l and P_r, whose coordinates in the image physical coordinate system are (x_l, y_l) and (x_r, y_r); the disparity of the target point is d = x_l − x_r; b is the distance between the camera optical centers; f is the focal length of the two imaging planes.
According to the imaging principle of the binocular camera and similar triangles, the relationship between the disparity map and depth is given by:
(b − (x_l − x_r)) / (Z − f) = b / Z (11)
the derivation can be found as follows:
Figure BDA0002474485000000072
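Equation (12) gives depth directly from disparity. A small sketch follows; the function name and the treatment of invalid (non-positive) disparities as infinite depth are our assumptions:

```python
import numpy as np

def depth_from_disparity(disparity, f, baseline):
    """Eq. (12): Z = f * b / d. disparity and f in pixels, baseline in
    meters -> depth in meters; d <= 0 (no valid match) maps to inf."""
    d = np.asarray(disparity, dtype=float)
    return np.where(d > 0, f * baseline / np.maximum(d, 1e-12), np.inf)
```

For example, with a 640-pixel focal length and a 0.12 m baseline, a 64-pixel disparity corresponds to a depth of 1.2 m.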
in order to illustrate the effectiveness and feasibility of the method, the algorithm of the invention is compared with the He algorithm, and the obtained defogging effect comparison graph is shown in FIG. 6, wherein FIG. 6(a) is an original graph of a picture a; FIG. 6(b) is a He algorithm effect diagram adopted by the picture a; FIG. 6(c) is a diagram illustrating the effect of the method adopted by the picture a; FIG. 6(d) original drawing of Picture b; FIG. 6(e) is a He algorithm effect diagram adopted by the picture b; fig. 6(f) is an effect diagram of the method adopted in the picture b.
The experimental results show that the He algorithm produces distortion when applied to the sky region, resulting in excessively high contrast there, whereas the algorithm of the invention effectively mitigates this phenomenon; the contrast and detail information of the defogged image are effectively recovered.
As shown in Table 1, the quality of the defogged images is quantitatively analyzed with evaluation indexes including information entropy, structural similarity (SSIM), and mean square error (MSE). Information entropy represents the average information content of an image: the larger its value, the greater the information content and the richer the detail. Mean square error measures the retention of effective information before and after processing: the smaller its value, the stronger the information retention of the processed image. Structural similarity measures the similarity between the restored image and the original: the larger the SSIM value, the higher the similarity between the two images.
TABLE 1
[Table 1: information entropy, SSIM, and MSE of the He algorithm and the proposed algorithm; the numeric entries are rendered as images in the source and are not recoverable here.]
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any equivalent substitution or change, made according to the technical solution and inventive concept of the present invention, that a person skilled in the art can readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present invention.

Claims (5)

1. A sea-sky background image defogging and binocular stereo vision positioning method, characterized by comprising the following steps:
s1, acquiring an original foggy image by using a binocular camera;
s2, segmenting a sky region and a non-sky region in the original foggy image, and optimizing the values of atmospheric light values and transmissivity on the basis of a dark channel defogging model to obtain an improved dark channel defogging model;
s3, carrying out defogging treatment on the original foggy image by adopting an improved dark channel defogging model to obtain a defogged image;
s4, carrying out stereo matching on the defogged left image and the defogged right image to obtain a disparity map;
and S5, obtaining the distance between the binocular camera and the target object according to the relation between the parallax map and the depth, and realizing target positioning.
2. The sea-sky background image defogging and binocular stereo vision positioning method according to claim 1, characterized in that the atmospheric light value is obtained by the following steps:
s2-1, dividing the sky area of the original foggy image by a quartering method;
s2-2, taking the difference between the average value of the pixels in the area and the standard deviation of the pixels as a measurement standard, and selecting the area with the largest difference in the divided sky area;
s2-3, further dividing the area with the largest difference, selecting the area with the largest difference again until the finally divided area is smaller than a preset threshold value, and taking the area as a final area of the pre-estimation of the atmospheric light value;
and S2-4, selecting the average value of the pixels of the area as the estimated value of the optimal atmosphere light value.
3. The sea-sky background image defogging and binocular stereo vision positioning method according to claim 1, characterized in that the formula of the estimated value of the optimal atmospheric light value is as follows:
A = S(x) / n
where n is the number of pixels in the region and S (x) is the sum of all pixel values in the region.
4. The sea-sky background image defogging and binocular stereo vision positioning method according to claim 1, characterized in that the transmittance is optimized by the following steps:
s3-1, segmenting the original foggy image into pixel blocks with similar texture, color and brightness by SLIC superpixel segmentation, wherein each pixel block has similar depth of field and fog concentration;
s3-2: and processing the blocking effect by adopting a guided filtering mode to obtain refined transmittance.
5. The sea-sky background image defogging and binocular stereo vision positioning method according to claim 1, characterized in that the model of the guided filtering is as follows:
q_i = a_k · I_i + b_k, ∀ i ∈ ω_k
where ω_k is a window of the guide image I centered on pixel k; (a_k, b_k) are constants within ω_k; q_i is the output image.
CN202010359252.9A 2020-04-29 2020-04-29 Defogging and binocular stereoscopic vision positioning method for sea and sky background image Active CN111553862B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010359252.9A CN111553862B (en) 2020-04-29 2020-04-29 Defogging and binocular stereoscopic vision positioning method for sea and sky background image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010359252.9A CN111553862B (en) 2020-04-29 2020-04-29 Defogging and binocular stereoscopic vision positioning method for sea and sky background image

Publications (2)

Publication Number Publication Date
CN111553862A true CN111553862A (en) 2020-08-18
CN111553862B CN111553862B (en) 2023-10-13

Family

ID=72007964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010359252.9A Active CN111553862B (en) 2020-04-29 2020-04-29 Defogging and binocular stereoscopic vision positioning method for sea and sky background image

Country Status (1)

Country Link
CN (1) CN111553862B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188059A (en) * 2020-09-30 2021-01-05 深圳市商汤科技有限公司 Wearable device, intelligent guiding method and device and guiding system
CN113379619A (en) * 2021-05-12 2021-09-10 电子科技大学 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
CN113487516A (en) * 2021-07-26 2021-10-08 河南师范大学 Defogging processing method for image data
CN114332682A (en) * 2021-12-10 2022-04-12 青岛杰瑞工控技术有限公司 Marine panoramic defogging target identification method
CN116523801A (en) * 2023-07-03 2023-08-01 贵州医科大学附属医院 Intelligent monitoring method for nursing premature infants
CN114332682B (en) * 2021-12-10 2024-06-04 青岛杰瑞工控技术有限公司 Marine panorama defogging target identification method

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104794688A (en) * 2015-03-12 2015-07-22 北京航空航天大学 Single image defogging method and device based on depth information separation sky region
CN108876743A (en) * 2018-06-26 2018-11-23 中山大学 A kind of image rapid defogging method, system, terminal and storage medium
CN110207650A (en) * 2019-05-31 2019-09-06 重庆迪星天科技有限公司 Automobile-used highway height-limiting frame height measurement method and device based on binocular vision
CN110211067A (en) * 2019-05-27 2019-09-06 哈尔滨工程大学 One kind being used for UUV Layer Near The Sea Surface visible images defogging method

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN104794688A (en) * 2015-03-12 2015-07-22 北京航空航天大学 Single image defogging method and device based on depth information separation sky region
CN108876743A (en) * 2018-06-26 2018-11-23 中山大学 A kind of image rapid defogging method, system, terminal and storage medium
CN110211067A (en) * 2019-05-27 2019-09-06 哈尔滨工程大学 Visible-light image defogging method for a UUV in the near-sea-surface layer
CN110207650A (en) * 2019-05-31 2019-09-06 重庆迪星天科技有限公司 Vehicle-oriented method and device for measuring the height of highway height-limiting frames based on binocular vision

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHAO HONG et al.: "Algorithm for Coding Unit Partition in 3D Animation Using High Efficiency Video Coding Based on Canny Operator Segment", Journal of Digital Information Management *
苏丽 et al.: "An improved dehazing algorithm for panoramic sea-fog images", Computer Simulation, vol. 33, no. 11 *
郭青山 et al.: "Image dehazing algorithm based on DehazeNet and edge-detection mean-guided filtering", Transducer and Microsystem Technologies, vol. 39, no. 1 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188059A (en) * 2020-09-30 2021-01-05 深圳市商汤科技有限公司 Wearable device, intelligent guiding method and apparatus, and guiding system
CN112188059B (en) * 2020-09-30 2022-07-15 深圳市商汤科技有限公司 Wearable device, intelligent guiding method and apparatus, and guiding system
CN113379619A (en) * 2021-05-12 2021-09-10 电子科技大学 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
CN113487516A (en) * 2021-07-26 2021-10-08 河南师范大学 Defogging processing method for image data
CN114332682A (en) * 2021-12-10 2022-04-12 青岛杰瑞工控技术有限公司 Marine panoramic defogging target identification method
CN114332682B (en) * 2021-12-10 2024-06-04 青岛杰瑞工控技术有限公司 Marine panorama defogging target identification method
CN116523801A (en) * 2023-07-03 2023-08-01 贵州医科大学附属医院 Intelligent monitoring method for nursing premature infants
CN116523801B (en) * 2023-07-03 2023-08-25 贵州医科大学附属医院 Intelligent monitoring method for nursing premature infants

Also Published As

Publication number Publication date
CN111553862B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN111553862B (en) Sea-sky background image defogging and binocular stereo vision positioning method
CN108682026B (en) Binocular vision stereo matching method based on multi-matching element fusion
CN108596849B (en) Single image defogging method based on sky region segmentation
Tripathi et al. Single image fog removal using bilateral filter
CN109685732B (en) High-precision depth image restoration method based on boundary capture
CN108898575B (en) Novel adaptive weight stereo matching method
CN108765342A (en) Underwater image restoration method based on an improved dark channel
WO2017023210A1 (en) Generating a merged, fused three-dimensional point cloud based on captured images of a scene
CN108377374B (en) Method and system for generating depth information related to an image
CN107622480B (en) Kinect depth image enhancement method
CN108257165B (en) Image stereo matching method and binocular vision equipment
CN110853151A (en) Three-dimensional point set recovery method based on video
CN107527325B (en) Monocular underwater vision enhancement method based on dark channel prior
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN111738941A (en) Underwater image optimization method fusing light field and polarization information
CN112561996A (en) Target detection method in autonomous underwater robot recovery docking
CN104778673B (en) Depth image enhancement method using an improved Gaussian mixture model
CN113379619B (en) Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
Wang et al. Single-image dehazing using color attenuation prior based on haze-lines
CN116757949A (en) Atmosphere-ocean scattering environment degradation image restoration method and system
CN113989164B (en) Underwater color image restoration method, system and storage medium
CN106097259B (en) Fast hazy-image reconstruction method based on transmission optimization
CN112598777B (en) Haze fusion method based on dark channel prior
CN115439349A (en) Underwater SLAM optimization method based on image enhancement
CN114418874A (en) Low-illumination image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant