CN116894791B - Visual SLAM method and system for enhancing image under low illumination condition - Google Patents
- Publication number: CN116894791B
- Application number: CN202310960245.8A
- Authority: CN (China)
- Prior art keywords: image, illumination, brightness, self, image enhancement
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to the technical field of unmanned aerial vehicles, and in particular discloses a visual SLAM method and system for enhancing images under low illumination conditions. The method comprises the following steps: judging the brightness of each frame of image according to an illumination intensity judgment algorithm; judging, from the gray-level histogram of the image brightness, whether the environment in which the unmanned aerial vehicle is currently located has reached the threshold at which the image enhancement algorithm must be started; the image enhancement algorithm is a convolutional-network learning model based on the Retinex theory, with a self-calibration module added to reduce the amount of computation; when the threshold is reached, processing the images obtained by the binocular camera with the image enhancement algorithm; and transmitting the images processed by the image enhancement algorithm to a visual odometer for map construction and obstacle avoidance. The unmanned aerial vehicle can thus fly and avoid obstacles autonomously in dim, unknown, and complex environments, improving its safety and stability in low-illumination environments.
Description
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a visual SLAM method and system for image enhancement under a low illumination condition.
Background
Low-illumination image enhancement aims to improve the perceptual quality of data captured in poorly lit scenes so that more information can be recovered. It has gradually become a research hotspot in the field of image processing and has very broad application prospects in artificial-intelligence-related industries such as autonomous driving and security. Traditional low-illumination image enhancement techniques often require sophisticated mathematics and strict derivations, and the resulting iterative procedures are generally complex and ill-suited to practical application. With the successive emergence of large-scale data sets, low-light image enhancement based on deep learning has become the current mainstream; however, such techniques are limited by the data distribution and suffer from unstable performance, narrow application scenarios, and similar problems.
Low-light image enhancement is a classical task in image processing and has received a great deal of attention in both academia and industry. Most traditional visual SLAM systems fuse RGB images with an IMU and other sensors; under low illumination they suffer from low positioning accuracy, large trajectory drift, and similar problems, which greatly degrade the performance of the visual odometer.
Therefore, how to solve the problems of low positioning accuracy and large trajectory drift in the prior art is a technical problem urgently awaiting a solution by those skilled in the art.
Disclosure of Invention
The invention provides a visual SLAM method for enhancing images under low illumination conditions, comprising the following steps:
Step S101: judging the brightness of each frame of image according to an illumination intensity judgment algorithm;
Step S102: judging, from the gray-level histogram of the image brightness, whether the environment in which the unmanned aerial vehicle is located has reached the threshold p at which the image enhancement algorithm must be started;
the image enhancement algorithm is a convolutional-network learning model based on the Retinex theory, with a self-calibration module added to reduce the amount of computation;
Step S103: when the threshold p is reached, processing the images obtained by the binocular camera with the image enhancement algorithm;
Step S104: transmitting the images processed by the image enhancement algorithm to a visual odometer for map construction and obstacle avoidance.
In some specific embodiments, the step S101 further includes:
setting a brightness threshold v; a pixel whose brightness is below the threshold is recorded as a dark pixel. Dividing the number of dark pixels by the total number of pixels in the picture yields a percentage l, and the dimness of the current environment is judged by comparing the value of l.
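The dark-pixel test above can be sketched as follows. This is a minimal illustration only; the concrete values of the brightness threshold v and the start threshold p are assumptions, since the patent text does not fix them:

```python
import numpy as np

def darkness_ratio(gray: np.ndarray, v: int = 40) -> float:
    """Fraction of pixels darker than the brightness threshold v (8-bit gray image)."""
    dark = np.count_nonzero(gray < v)
    return dark / gray.size

def needs_enhancement(gray: np.ndarray, v: int = 40, p: float = 0.5) -> bool:
    """Start the image enhancement algorithm when the dark-pixel ratio reaches p."""
    return darkness_ratio(gray, v) >= p

# Toy 4x4 "image": half the pixels are dark.
img = np.array([[10, 10, 200, 200],
                [10, 10, 200, 200],
                [10, 10, 200, 200],
                [10, 10, 200, 200]], dtype=np.uint8)
print(darkness_ratio(img))     # 0.5
print(needs_enhancement(img))  # True
```

In a real pipeline the gray-level histogram of each camera frame would feed this test per frame.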
In some embodiments, the image enhancement algorithm comprises:
a weight-sharing illumination learning module, a self-calibration module, and an unsupervised loss function, which progressively optimize the illumination and contrast of the image, yielding an image that is as clear as possible without overexposure, while leaving the original image format unchanged.
In some embodiments, a completely new feed-forward correction network structure is introduced into the self-calibration module. The structure comprises three convolution kernels K1, K2, K3, three filters F1, F2, F3 for smoothing the input, and a pooling layer with stride r. The formula expression of the self-calibration module is as follows:
where x_t is the illumination at the current stage; z_t is the illumination obtained after the filter F2; s_t is the illumination obtained by superimposing x_t and z_t; p_t is the self-corrected illumination obtained by calibrating s_t with a Sigmoid activation applied to the features extracted by the K3 convolution; w_t is the illumination obtained after the filter F3; and r_t is the calibrated input for the next stage.
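Since the formula images themselves are not reproduced in this text, the per-stage data flow described above can only be sketched under assumptions. In the NumPy sketch below, a simple box blur stands in for each learned convolution/filter pair, and the final combination producing r_t is a guess; it illustrates the shape of the stage computation, not the trained module:

```python
import numpy as np

def box_blur(x, k=3):
    """Crude stand-in for a learned convolution/filter pair: k x k mean filter."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_calibration_stage(y, x_t):
    """One schematic stage of the described data flow; y is the input image."""
    z_t = box_blur(x_t)           # stand-in for the F2 filtering of x_t
    s_t = x_t + z_t               # superimposed illumination
    p_t = sigmoid(box_blur(s_t))  # Sigmoid over (stand-in) K3-extracted features
    w_t = box_blur(p_t)           # stand-in for the F3 filtering
    r_t = y * w_t                 # assumed combination giving the next-stage input
    return r_t

y = np.random.default_rng(0).uniform(0.0, 0.3, size=(8, 8))  # dark toy image
r = self_calibration_stage(y, y.copy())
print(r.shape)  # (8, 8)
```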
In some embodiments, the unsupervised loss function constrains the photometric loss at each stage, and is formulated as follows:
where M_t is the occlusion mask, M_1 is the non-occlusion mask, ψ is the robust penalty function, I_t and I_{t+1} indicate the photometric difference, and V_f(p) denotes the forward optical flow.
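The loss formula itself is not reproduced in this text. A common occlusion-masked photometric loss matching the symbols described — a robust penalty ψ applied to the difference between I_t at p and I_{t+1} at p + V_f(p) — can be sketched as follows; the Charbonnier penalty and the nearest-neighbor warp are assumptions, not the patent's exact choices:

```python
import numpy as np

def charbonnier(x, eps=1e-3):
    """A common choice of robust penalty function psi."""
    return np.sqrt(x * x + eps * eps)

def photometric_loss(I_t, I_t1, flow, mask):
    """Occlusion-masked photometric loss: psi(I_t(p) - I_{t+1}(p + V_f(p))).

    flow is an (H, W, 2) forward optical-flow field in pixels; the warp uses
    nearest-neighbor lookup for brevity."""
    H, W = I_t.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yw = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    xw = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    diff = I_t - I_t1[yw, xw]
    return float((mask * charbonnier(diff)).sum() / max(mask.sum(), 1))

# Identical frames, zero flow, no occlusion: loss collapses to the eps floor.
I_t = np.zeros((4, 4)); I_t1 = np.zeros((4, 4))
flow = np.zeros((4, 4, 2)); mask = np.ones((4, 4))
print(round(photometric_loss(I_t, I_t1, flow, mask), 4))  # 0.001
```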
In order to achieve the above object, the present application further provides a visual SLAM system for enhancing images under low light conditions, comprising:
a brightness judgment module, for judging the brightness of each frame of image according to an illumination intensity judgment algorithm;
an algorithm starting module, for judging, from the gray-level histogram of the image brightness, whether the environment in which the unmanned aerial vehicle is located has reached the threshold p at which the image enhancement algorithm must be started;
the image enhancement algorithm is a convolutional-network learning model based on the Retinex theory, with a self-calibration module added to reduce the amount of computation;
an image processing module, for processing the images obtained by the binocular camera with the image enhancement algorithm once the threshold p is reached; and
an image transmission module, for transmitting the images processed by the image enhancement algorithm to a visual odometer for map construction and obstacle avoidance.
In some embodiments, the brightness determination module is further configured to:
setting a brightness threshold v; a pixel whose brightness is below the threshold is recorded as a dark pixel. Dividing the number of dark pixels by the total number of pixels in the picture yields a percentage l, and the dimness of the current environment is judged by comparing the value of l.
In some embodiments, the image enhancement algorithm comprises:
a weight-sharing illumination learning module, a self-calibration module, and an unsupervised loss function, which progressively optimize the illumination and contrast of the image, yielding an image that is as clear as possible without overexposure, while leaving the original image format unchanged.
In some embodiments, a new feed-forward correction network structure is introduced into the self-calibration module. The structure comprises three convolution kernels K1, K2, K3, three filters F1, F2, F3 for smoothing the input, and a pooling layer with stride r. The formula expression of the self-calibration module is as follows:
where x_t is the illumination at the current stage; z_t is the illumination obtained after the filter F2; s_t is the illumination obtained by superimposing x_t and z_t; p_t is the self-corrected illumination obtained by calibrating s_t with a Sigmoid activation applied to the features extracted by the K3 convolution; w_t is the illumination obtained after the filter F3; and r_t is the calibrated input for the next stage.
In some embodiments, the unsupervised loss function constrains the photometric loss at each stage, and is formulated as follows:
where M_t is the occlusion mask, M_1 is the non-occlusion mask, ψ is the robust penalty function, I_t and I_{t+1} indicate the photometric difference, and V_f(p) denotes the forward optical flow.
The beneficial effects of this technical scheme are:
(1) The unmanned aerial vehicle can autonomously avoid obstacles and fly in dim, unknown, and complex environments, improving its safety and stability in low-illumination environments.
(2) The low-light image enhancement model used in the method can be deployed on unmanned aerial vehicle equipment and applied widely in various dim scenes to improve the accuracy and robustness of visual SLAM.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a visual SLAM method for enhancing an image under low illumination conditions according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a visual SLAM system for enhancing images under low light conditions according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a self-calibration module of a visual SLAM method and system for image enhancement under low light conditions according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments.
Examples of the embodiments are illustrated in the accompanying drawings, wherein like or similar symbols indicate like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
Example 1
One embodiment of the present invention provides a visual SLAM method for image enhancement under low light conditions, as shown with reference to fig. 1, comprising:
step S101: according to the illumination intensity judgment algorithm, judging the brightness of each frame of image;
in a specific embodiment of the present invention, the step S101 further includes:
setting a brightness threshold v; a pixel whose brightness is below the threshold is recorded as a dark pixel. Dividing the number of dark pixels by the total number of pixels in the picture yields a percentage l, and the dimness of the current environment is judged by comparing the value of l.
Step S102: judging whether the current environment of the unmanned aerial vehicle reaches a threshold p requiring starting an image enhancement algorithm according to the gray level histogram of the image brightness, wherein the image enhancement algorithm is a convolution network learning model based on the Retinex theory, and a self-calibration module is added for reducing the calculated amount.
In a specific embodiment of the present invention, the image enhancement algorithm comprises:
a weight-sharing illumination learning module, a self-calibration module, and an unsupervised loss function, which progressively optimize the illumination and contrast of the image, yielding an image that is as clear as possible without overexposure, while leaving the original image format unchanged.
Specifically, the low-illumination image enhancement algorithm is based on a lightweight illumination learning model of the Retinex theory, comprises an illumination learning module with weight sharing, a self-calibration module and an unsupervised loss function, can progressively optimize the illumination and contrast of an image, can obtain an image with higher definition and without overexposure as far as possible, and does not change the original format of the image.
Specifically, the optimization of the image illumination by the lightweight illumination learning model is based on the Retinex theory. Retinex theory in fact belongs to a type of image decomposition: it decomposes the image into the product of an illumination component and a reflectance component, constructing the mathematical model:
S(x,y)=R(x,y)·L(x,y)
Retinex theory holds that the image S(x,y) equals the reflectance component R(x,y) multiplied by the illumination component L(x,y). The illumination component contains the general outline of the scene and its intensity distribution, while the reflectance component represents the intrinsic properties of the image, including all edge details, colors, and so on.
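As a toy illustration of this decomposition (illustrative only — real methods estimate the illumination with a network rather than assuming it is known):

```python
import numpy as np

# Toy Retinex decomposition S = R * L: a dark image S is the product of a
# reflectance component R (scene properties) and an illumination component L.
rng = np.random.default_rng(1)
R_true = rng.uniform(0.2, 1.0, size=(6, 6))   # reflectance
L_true = np.full((6, 6), 0.1)                 # uniform dim illumination
S = R_true * L_true                           # observed low-light image

# If the illumination estimate is accurate, the clear image (reflectance)
# follows directly from the relationship: R = S / L.
L_est = np.full_like(S, 0.1)                  # suppose the estimate is exact
R_est = S / L_est
print(np.allclose(R_est, R_true))  # True
```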
Specifically, according to Retinex theory, S(x,y)=R(x,y)·L(x,y). In model-based methods, estimating the illumination is generally treated as the main optimization target; once an accurate illumination is obtained, a clear image can be recovered directly from this relationship. The model adopts a progressive illumination optimization process whose basic recurrence is:
where u_t and x_t denote the residual and the illumination at stage t, respectively, and x_0 denotes the initial value. H_θ denotes the illumination estimation network, i.e., the illumination learning process, and θ denotes the training weights. Note that H_θ does not depend on the stage index: the illumination estimation network keeps the same structure and shares its parameters across all stages, i.e., the same H_θ is used in every iteration.
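The recurrence images are not reproduced in this text; a plausible reading of the symbols described is u_t = H_θ(x_t), x_{t+1} = x_t + u_t. The sketch below uses a fixed toy function in place of the learned, weight-shared H_θ, so it shows only the iteration structure, not the actual network:

```python
import numpy as np

def H_theta(x):
    """Toy stand-in for the weight-shared illumination estimation network:
    nudges the illumination toward a brighter target. The same function
    ("same parameters") is reused at every stage, as the text describes."""
    return 0.5 * (0.9 - x)

def progressive_illumination(x0, stages=5):
    x = x0
    for _ in range(stages):      # same H_theta in every iteration
        u = H_theta(x)           # residual at stage t (assumed: u_t = H(x_t))
        x = x + u                # assumed update: x_{t+1} = x_t + u_t
    return x

x0 = np.array([0.1, 0.2, 0.3])   # initial (dark) illumination
xT = progressive_illumination(x0)
print(np.round(xT, 3))  # [0.875 0.878 0.881]
```

Each stage brightens the estimate further, converging toward a stable illumination.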
In one embodiment of the present invention, referring to FIG. 3, a completely new feed-forward correction network structure is introduced into the self-calibration module. The structure comprises three convolution kernels K1, K2, K3, three filters F1, F2, F3 for smoothing the input, and a pooling layer with stride r. The formula expression of the self-calibration module is as follows:
where x_t is the illumination at the current stage; z_t is the illumination obtained after the filter F2; s_t is the illumination obtained by superimposing x_t and z_t; p_t is the self-corrected illumination obtained by calibrating s_t with a Sigmoid activation applied to the features extracted by the K3 convolution; w_t is the illumination obtained after the filter F3; and r_t is the calibrated input for the next stage.
In one embodiment of the invention, the unsupervised loss function constrains the photometric loss at each stage, and is formulated as follows:
where M_t is the occlusion mask, M_1 is the non-occlusion mask, ψ is the robust penalty function, I_t and I_{t+1} indicate the photometric difference, and V_f(p) denotes the forward optical flow.
Step S103: when the threshold p is reached, the image obtained by the binocular camera is processed by an image enhancement algorithm.
In a specific embodiment of the invention, the visual SLAM obstacle avoidance assembly comprises a binocular camera, which acquires visual images of the unmanned aerial vehicle's direction of travel and transmits them to the onboard processor; the onboard processor performs visual SLAM and obstacle avoidance in real time. The visual odometer at the front end of the unmanned aerial vehicle adopts ORB-SLAM3: by fusing the binocular images enhanced by the low-light enhancement model with IMU information, it estimates the current pose of the unmanned aerial vehicle and outputs sparse point-cloud information for use by the back-end autonomous planner. Real-time path planning is performed with the A* algorithm on the point-cloud information output by the visual odometer.
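As an illustration of the A* planning step, here is a minimal 2-D occupancy-grid version; the actual planner runs on the 3-D point-cloud map and its details are not specified in the text:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 2-D occupancy grid (1 = obstacle), 4-connected moves,
    Manhattan-distance heuristic. Returns the path as a list of cells."""
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    tie = count()  # tie-breaker so the heap never compares cells/parents
    open_set = [(h(start), 0, next(tie), start, None)]
    came, g_best = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue           # already expanded with a better cost
        came[cur] = parent
        if cur == goal:        # reconstruct path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None

# A wall across the middle row forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```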
Step S104: transmitting the images processed by the image enhancement algorithm to a visual odometer for map construction and obstacle avoidance.
In a specific embodiment of the invention, in actual operation, the ground station of the unmanned aerial vehicle is turned on, and the power switch, the data transmission switch, the image transmission switch, and the onboard processor switch are started; the image transmission receiver is connected to a computer and to the unmanned aerial vehicle through the ground station. At this point the ground station can be used to check whether the unmanned aerial vehicle's states, such as image transmission, are normal, and whether the height reported by the laser radar is normal.
Specifically, the onboard processor is connected through ssh, SLAM mapping is started through ROS, and the aircraft is rocked left and right to see whether the VINS-Fusion display is normal and whether communication on each topic is normal. The image brightness judgment algorithm is then started; if the environment is dim, it automatically starts the image enhancement algorithm to improve the brightness and contrast of the images.
A flight-trajectory waypoint is set at the ground station, and a command is input through ROS so that the aircraft enters offboard mode and takes off. The unmanned aerial vehicle then flies toward the initial waypoint, avoiding obstacles in real time through the A* algorithm and the VINS-Fusion visual-inertial odometer.
Specifically, after the unmanned aerial vehicle reaches the reconnaissance area, the flight mode is switched to fixed-point (position-hold) mode, and stable hovering is achieved with laser-radar-assisted height holding. Once the attitude of the unmanned aerial vehicle is stable, the gimbal angle is adjusted in preparation for aerial image acquisition.
The unmanned aerial vehicle can thus fly and avoid obstacles autonomously in dim, unknown, and complex environments, improving its safety and stability in low-illumination environments. The low-light image enhancement model used in the method can be deployed on unmanned aerial vehicle equipment and applied widely in various dim scenes to improve the accuracy and robustness of visual SLAM.
Example two
One embodiment of the present invention provides a visual SLAM system for image enhancement in low light conditions, as shown with reference to fig. 2, comprising:
and a brightness judging module: and the brightness judgment is carried out on each frame of image according to the illumination intensity judgment algorithm.
In a specific embodiment of the present invention, the brightness determination module is further configured to:
setting a brightness threshold v; a pixel whose brightness is below the threshold is recorded as a dark pixel. Dividing the number of dark pixels by the total number of pixels in the picture yields a percentage l, and the dimness of the current environment is judged by comparing the value of l.
An algorithm starting module: for judging, from the gray-level histogram of the image brightness, whether the environment in which the unmanned aerial vehicle is located has reached the threshold p at which the image enhancement algorithm must be started;
the image enhancement algorithm is a convolutional-network learning model based on the Retinex theory, with a self-calibration module added to reduce the amount of computation.
In a specific embodiment of the present invention, the image enhancement algorithm comprises:
a weight-sharing illumination learning module, a self-calibration module, and an unsupervised loss function, which progressively optimize the illumination and contrast of the image, yielding an image that is as clear as possible without overexposure, while leaving the original image format unchanged.
Specifically, the low-illumination image enhancement algorithm is based on a lightweight illumination learning model of the Retinex theory, comprises an illumination learning module with weight sharing, a self-calibration module and an unsupervised loss function, can progressively optimize the illumination and contrast of an image, can obtain an image with higher definition and without overexposure as far as possible, and does not change the original format of the image.
Specifically, the optimization of the image illumination by the lightweight illumination learning model is based on the Retinex theory. Retinex theory in fact belongs to a type of image decomposition: it decomposes the image into the product of an illumination component and a reflectance component, constructing the mathematical model:
S(x,y)=R(x,y)·L(x,y)
Retinex theory holds that the image S(x,y) equals the reflectance component R(x,y) multiplied by the illumination component L(x,y). The illumination component contains the general outline of the scene and its intensity distribution, while the reflectance component represents the intrinsic properties of the image, including all edge details, colors, and so on.
Specifically, according to Retinex theory, S(x,y)=R(x,y)·L(x,y). In model-based methods, estimating the illumination is generally treated as the main optimization target; once an accurate illumination is obtained, a clear image can be recovered directly from this relationship. The model adopts a progressive illumination optimization process whose basic recurrence is:
where u_t and x_t denote the residual and the illumination at stage t, respectively, and x_0 denotes the initial value. H_θ denotes the illumination estimation network, i.e., the illumination learning process, and θ denotes the training weights. Note that H_θ does not depend on the stage index: the illumination estimation network keeps the same structure and shares its parameters across all stages, i.e., the same H_θ is used in every iteration.
In one embodiment of the present invention, referring to FIG. 3, a completely new feed-forward correction network structure is introduced into the self-calibration module. The structure comprises three convolution kernels K1, K2, K3, three filters F1, F2, F3 for smoothing the input, and a pooling layer with stride r. The formula expression of the self-calibration module is as follows:
where x_t is the illumination at the current stage; z_t is the illumination obtained after the filter F2; s_t is the illumination obtained by superimposing x_t and z_t; p_t is the self-corrected illumination obtained by calibrating s_t with a Sigmoid activation applied to the features extracted by the K3 convolution; w_t is the illumination obtained after the filter F3; and r_t is the calibrated input for the next stage.
In one embodiment of the invention, the unsupervised loss function constrains the photometric loss at each stage, and is formulated as follows:
where M_t is the occlusion mask, M_1 is the non-occlusion mask, ψ is the robust penalty function, I_t and I_{t+1} indicate the photometric difference, and V_f(p) denotes the forward optical flow.
An image processing module: for processing the image obtained by the binocular camera by means of an image enhancement algorithm when the threshold p is reached.
In a specific embodiment of the invention, the visual SLAM obstacle avoidance assembly comprises a binocular camera, which acquires visual images of the unmanned aerial vehicle's direction of travel and transmits them to the onboard processor; the onboard processor performs visual SLAM and obstacle avoidance in real time. The visual odometer at the front end of the unmanned aerial vehicle adopts ORB-SLAM3: by fusing the binocular images enhanced by the low-light enhancement model with IMU information, it estimates the current pose of the unmanned aerial vehicle and outputs sparse point-cloud information for use by the back-end autonomous planner. Real-time path planning is performed with the A* algorithm on the point-cloud information output by the visual odometer.
An image transmission module: for transmitting the images processed by the image enhancement algorithm to a visual odometer for map construction and obstacle avoidance.
In a specific embodiment of the invention, in actual operation, the ground station of the unmanned aerial vehicle is turned on, and the power switch, the data transmission switch, the image transmission switch, and the onboard processor switch are started; the image transmission receiver is connected to a computer and to the unmanned aerial vehicle through the ground station. At this point the ground station can be used to check whether the unmanned aerial vehicle's states, such as image transmission, are normal, and whether the height reported by the laser radar is normal.
Specifically, the onboard processor is connected through ssh, SLAM mapping is started through ROS, and the aircraft is shaken left and right to check whether the VINS-Fusion output is displayed normally and whether each topic communicates normally. The image brightness judgment algorithm is then started; if the environment is dim, the brightness judgment algorithm automatically starts the image enhancement algorithm to improve the brightness and contrast of the image.
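As an illustrative sketch of the brightness judgment step (thresholds v and p, and all names, are assumptions; the patent does not fix their values), the dark-pixel percentage can be computed from the gray level histogram:

```python
import numpy as np

def dark_pixel_ratio(gray, v=40):
    """Fraction l of pixels whose brightness falls below the threshold v,
    computed from the 256-bin gray level histogram (values 0-255)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return hist[:v].sum() / gray.size

def needs_enhancement(gray, v=40, p=0.5):
    """Start the image enhancement algorithm when the dark-pixel
    ratio l exceeds the threshold p."""
    return dark_pixel_ratio(gray, v) > p

dark = np.full((8, 8), 10, dtype=np.uint8)     # uniformly dim frame
bright = np.full((8, 8), 200, dtype=np.uint8)  # well-lit frame
```

A uniformly dim frame yields l = 1.0 and triggers enhancement; a well-lit frame yields l = 0.0 and passes through untouched.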
Flight trajectory waypoints are set at the ground station, and a command is input through ROS so that the aircraft enters offboard mode and takes off. The unmanned aerial vehicle then flies towards the initial waypoint, avoiding obstacles in real time by means of the A* algorithm and the VINS-Fusion visual-inertial odometer.
Specifically, after the unmanned aerial vehicle reaches the reconnaissance area, the flight mode is switched to fixed-point (position-hold) flight, and stable hovering is achieved through laser-radar-assisted height holding. Once the attitude of the unmanned aerial vehicle is stable, the gimbal angle is adjusted in preparation for aerial image acquisition.
The unmanned aerial vehicle can thus autonomously avoid obstacles while flying in dim, unknown and complex environments, which improves its safety and stability under low-illumination conditions. The low-light image enhancement model used in the method can be deployed on unmanned aerial vehicle equipment and applied widely to various dim scenes to improve the accuracy and robustness of visual SLAM.
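One stage of the self-calibration module used in the low-light enhancement model (x_t, z_t, s_t, p_t, w_t, r_t as defined in the claims) can be sketched as follows, for illustration only: the filters F_2, F_3 and kernel K_3 are stand-ins for learned convolutions (elementwise callables here for brevity), and feeding the calibration result back onto the original low-light input y is an assumption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def self_calibration_stage(x_t, y, F2, F3, K3):
    """One stage of the described self-calibration module (a sketch).
    x_t: illumination estimate of the current stage; y: low-light input."""
    z_t = F2(x_t)             # illumination after filter F_2
    s_t = x_t + z_t           # superimposed illumination
    p_t = sigmoid(K3(s_t))    # Sigmoid-calibrated, self-corrected illumination
    w_t = F3(p_t)             # illumination after filter F_3
    r_t = y + w_t             # calibrated input for the next stage
    return r_t

# trivial stand-ins: F2 and K3 zero out their input, F3 is identity
x = np.zeros((2, 2))
y = np.zeros((2, 2))
r = self_calibration_stage(x, y, lambda a: a * 0.0, lambda a: a, lambda a: a * 0.0)
```

With these stand-ins the gate sits at sigmoid(0) = 0.5, so each entry of the next-stage input r_t is the input y shifted by 0.5, which makes the data flow of the stage easy to trace.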
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "examples," "particular examples," "one particular embodiment," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting thereof; although the present application has been described in detail with reference to the foregoing embodiments, one of ordinary skill in the art will appreciate that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not drive the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (8)
1. A visual SLAM method of image enhancement in low light conditions, comprising:
step S101: according to the illumination intensity judgment algorithm, judging the brightness of each frame of image;
step S102: judging, according to the gray level histogram of the image brightness, whether the environment where the unmanned aerial vehicle is located reaches the threshold p at which the image enhancement algorithm needs to be started;
the image enhancement algorithm is a convolutional network learning model based on the Retinex theory, and a self-calibration module is added for reducing the calculated amount;
step S103: when the threshold value p is reached, processing the image obtained by the binocular camera through an image enhancement algorithm;
step S104: transmitting the image processed by the image enhancement algorithm to a visual odometer for map construction and obstacle avoidance;
introducing a brand new feedforward correction network structure into the self-calibration module, the structure comprising three convolution kernels K_1, K_2, K_3, three filters F_1, F_2, F_3, and a pooling layer with step length r for smoothing the input; the formula expression of the self-calibration module is as follows:
wherein x_t is the illumination of the current stage, z_t is the illumination obtained after the filter F_2, s_t is the illumination obtained by superimposing x_t and z_t, p_t is the self-corrected illumination obtained by calibrating, through a Sigmoid activation function, the features extracted by the K_3 convolution of s_t, w_t is the illumination obtained after the filter F_3, and r_t is the calibrated input of the next stage.
2. The visual SLAM method of claim 1, wherein step S101 further comprises:
setting a brightness threshold v; when the brightness of a pixel is lower than the threshold v, the pixel is recorded as a dark pixel; the number of dark pixels is divided by the total number of pixels of the picture to obtain a percentage l, and the degree of dimness of the current environment is judged from the value of l.
3. The visual SLAM method of image enhancement under low light conditions of claim 1, wherein the image enhancement algorithm comprises:
the shared-weight illumination learning, the self-calibration module and the unsupervised loss function progressively optimize the illumination and contrast of the image, so as to obtain an image with higher definition and, as far as possible, without overexposure, while the original format of the image remains unchanged.
4. A visual SLAM method for image enhancement under low light conditions according to claim 3, wherein said unsupervised loss function is formulated to constrain the photometric loss at each stage as follows:
wherein M_t is the occlusion mask, M_1 is the non-occlusion mask, ψ is the robust penalty function, I_t and I_{t+1} are the frames whose photometric difference is constrained, and V_f(p) denotes the forward optical flow.
5. A visual SLAM system for image enhancement in low light conditions, comprising:
and a brightness judging module: the brightness judgment method is used for judging the brightness of each frame of image according to an illumination intensity judgment algorithm;
an algorithm starting module: for judging, according to the gray level histogram of the image brightness, whether the environment where the unmanned aerial vehicle is located reaches the threshold p for starting the image enhancement algorithm;
the image enhancement algorithm is a convolutional network learning model based on the Retinex theory, and a self-calibration module is added for reducing the calculated amount;
an image processing module: the image processing method is used for processing the image obtained by the binocular camera through an image enhancement algorithm after the threshold p is reached;
and an image transmission module: the image processing method is used for transmitting the image processed by the image enhancement algorithm to a visual odometer for image construction and obstacle avoidance;
introducing a brand new feedforward correction network structure into the self-calibration module, the structure comprising three convolution kernels K_1, K_2, K_3, three filters F_1, F_2, F_3, and a pooling layer with step length r for smoothing the input; the formula expression of the self-calibration module is as follows:
wherein x_t is the illumination of the current stage, z_t is the illumination obtained after the filter F_2, s_t is the illumination obtained by superimposing x_t and z_t, p_t is the self-corrected illumination obtained by calibrating, through a Sigmoid activation function, the features extracted by the K_3 convolution of s_t, w_t is the illumination obtained after the filter F_3, and r_t is the calibrated input of the next stage.
6. The low-light condition image-enhanced visual SLAM system of claim 5, wherein the brightness determination module is further configured to:
setting a brightness threshold v; when the brightness of a pixel is lower than the threshold v, the pixel is recorded as a dark pixel; the number of dark pixels is divided by the total number of pixels of the picture to obtain a percentage l, and the degree of dimness of the current environment is judged from the value of l.
7. The low-light condition image-enhanced visual SLAM system of claim 5, wherein the image enhancement algorithm comprises:
the shared-weight illumination learning, the self-calibration module and the unsupervised loss function progressively optimize the illumination and contrast of the image, so as to obtain an image with higher definition and, as far as possible, without overexposure, while the original format of the image remains unchanged.
8. The low-light condition image-enhanced visual SLAM system of claim 7, wherein the unsupervised loss function is formulated to constrain the photometric loss at each stage as follows:
wherein M_t is the occlusion mask, M_1 is the non-occlusion mask, ψ is the robust penalty function, I_t and I_{t+1} are the frames whose photometric difference is constrained, and V_f(p) denotes the forward optical flow.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310960245.8A CN116894791B (en) | 2023-08-01 | 2023-08-01 | Visual SLAM method and system for enhancing image under low illumination condition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116894791A CN116894791A (en) | 2023-10-17 |
CN116894791B true CN116894791B (en) | 2024-02-09 |
Family
ID=88312066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310960245.8A Active CN116894791B (en) | 2023-08-01 | 2023-08-01 | Visual SLAM method and system for enhancing image under low illumination condition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116894791B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112287861A (en) * | 2020-11-05 | 2021-01-29 | 山东交通学院 | Road information enhancement and driving early warning method based on night environment perception |
CN112396073A (en) * | 2019-08-15 | 2021-02-23 | 广州虎牙科技有限公司 | Model training method and device based on binocular images and data processing equipment |
CN115526811A (en) * | 2022-11-28 | 2022-12-27 | 电子科技大学中山学院 | Adaptive vision SLAM method suitable for variable illumination environment |
CN115619670A (en) * | 2022-10-18 | 2023-01-17 | 佛山市南海区广工大数控装备协同创新研究院 | Method, system and related equipment for enhancing low-light image |
CN115861101A (en) * | 2022-11-29 | 2023-03-28 | 福州大学 | Low-illumination image enhancement method based on depth separable convolution |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||