CN112686105B - Fog concentration grade identification method based on video image multi-feature fusion

Fog concentration grade identification method based on video image multi-feature fusion

Info

Publication number
CN112686105B
Authority
CN
China
Prior art keywords
image
feature
global
slice
extraction module
Prior art date
Legal status
Active
Application number
CN202011511720.6A
Other languages
Chinese (zh)
Other versions
CN112686105A (en)
Inventor
杨文臣
李春晓
戴秉佑
房锐
田毕江
胡澄宇
苏宇
李薇
李亚军
Current Assignee
BROADVISION ENGINEERING CONSULTANTS
Original Assignee
BROADVISION ENGINEERING CONSULTANTS
Priority date
Filing date
Publication date
Application filed by BROADVISION ENGINEERING CONSULTANTS filed Critical BROADVISION ENGINEERING CONSULTANTS
Priority to CN202011511720.6A
Publication of CN112686105A
Application granted
Publication of CN112686105B
Legal status: Active

Abstract

The invention relates to a fog concentration grade identification method based on video image multi-feature fusion. The method first randomly generates a plurality of non-overlapping image slices on an original image, processes each slice with a Canny operator to obtain its image edges, counts the edge pixels of each slice, and selects the slice with the most edge pixels as the target slice; a detail feature extraction module extracts detail image features from the target slice; a global image feature extraction module extracts global image features from the original image; and a feature fusion identification module fuses and classifies the obtained detail and global image features to obtain the fog concentration grade of the original image. By means of a fog concentration deep learning network that fuses global and detail multi-features, the invention improves the accuracy and robustness of visibility grading, and has the technical characteristics of data driving, automatic feature identification, and self-learning.

Description

Fog concentration grade identification method based on video image multi-feature fusion
Technical Field
The invention belongs to the technical field of highway traffic meteorological monitoring and video image intelligent analysis, and particularly relates to a fog concentration grade identification method based on video image multi-feature fusion.
Background
Technologies such as video perception, image recognition, and embedded artificial intelligence chips are undergoing revolutionary upgrades, and video-based intelligent cognition and intelligent traffic management are fusing with and empowering each other. Deeply mining traffic image data to build problem-analysis models and constructing a real-time dynamic information service system that provides intelligent decision support for industry management and graded travel services for the public are important content of intelligent expressway construction.
In recent years, many fog concentration identification methods based on fog scene pictures have emerged, mainly including camera model calibration, dark channel prior, and dual brightness difference methods. The camera model calibration method directly uses the road as the target object, determines the atmospheric extinction coefficient through a real-time graphics processing program, and calculates atmospheric visibility using Koschmieder's law. Its disadvantage is that it requires precise geometric calibration of the camera and a high-contrast reference object in the scene. The dark channel prior method obtains the transmittance from the target object to the imaging point according to the dark channel prior theory, derives the atmospheric extinction coefficient from the transmittance, and then estimates the visibility. Current research shows that the transmittance obtained by this method is not accurate enough, and the real-time performance of the optimization algorithm is poor. The dual brightness difference method calculates visibility from the ratio of the background brightness differences, relative to the corresponding horizontal sky, of two targets at different distances near the horizon. All these fog concentration estimation algorithms take visibility estimation as the entry point and are easily affected by uneven fog, so the estimation results have large errors; in addition, they suffer from poor robustness and strong scene dependence. Therefore, how to overcome the defects of the prior art is an urgent problem in the technical field of road traffic meteorological monitoring and video image intelligent analysis.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a fog concentration level identification method based on video image multi-feature fusion.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a fog concentration grade identification method based on video image multi-feature fusion comprises the following steps:
step (1), randomly generating a plurality of non-overlapping image slices on an original image, processing each slice with a Canny operator to obtain the image edges of each slice, counting the number of edge pixels of each slice, and selecting the slice with the largest number of edge pixels as a target slice;
step (2), a detail feature extraction module is adopted to extract detail image features of the target slice;
the detail feature extraction module comprises 8 convolution layers, 4 pooling layers and 1 fully connected layer; a BN layer is introduced after each convolution layer, and eRelu is adopted as the activation function; a pooling layer is set after every two consecutive convolution operations; finally, an N × 1-dimensional feature vector is generated through the fully connected layer, and this vector is the extracted detail image feature;
step (3), a global image feature extraction module is adopted to extract global image features of the original image;
the global image feature extraction module comprises 5 convolution layers, 2 pooling layers and 1 fully connected layer; a BN layer is introduced after each convolution layer, and eRelu is adopted as the activation function; a pooling layer is set after the third convolution operation and after the last convolution operation; finally, an n × 1-dimensional feature vector is output through the fully connected layer, and this vector is the extracted global image feature;
and step (4), fusing and identifying the detail image features obtained in step (2) and the global image features obtained in step (3) by adopting a feature fusion identification module, to obtain the fog concentration grade of the original image.
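Read together, steps (1) to (4) compose into a single pipeline. The following Python sketch shows that composition only; the callables slicer, detail_net, global_net and fusion_head stand for the modules described in this document, the helper to_tensor is an illustrative assumption, and global_net is assumed to append the dark channel input internally:

```python
import numpy as np
import torch

def to_tensor(img: np.ndarray) -> torch.Tensor:
    """HWC uint8 image -> 1xCxHxW float tensor in [0, 1]."""
    return torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0) / 255.0

def classify_fog_level(image: np.ndarray, slicer, detail_net,
                       global_net, fusion_head) -> int:
    """Composition of steps (1)-(4) of the method."""
    target_slice = slicer(image)                         # step (1): target slice
    f_detail = detail_net(to_tensor(target_slice))       # step (2): N x 1 vector
    f_global = global_net(to_tensor(image))              # step (3): n x 1 vector
    fused = torch.cat([f_global, f_detail], dim=1)       # step (4): cascade fusion
    return int(fusion_head(fused).argmax(dim=1).item())  # fog concentration grade
```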
Further, it is preferable that in step (1), 10 non-overlapping image slices are randomly generated on the original image.
Further, it is preferable that in the step (3), a void convolution kernel of 5 × 5 is used in the first convolution layer; the second convolutional layer used a 3 x 3 void convolutional kernel.
Further, it is preferable that in the step (4), the fusion is performed by a cascade fusion method.
Further, it is preferable to train three models simultaneously using the same training set and test set: (1) the global feature extraction module, whose input is feature data of the four channels, RGB plus the dark channel; (2) the detail feature extraction module, whose input is the sliced image data; (3) the feature fusion identification module, whose inputs are the global and local feature vectors.
Further, it is preferable that the training parameters are: the learning rate is 0.001, decayed by 0.0005 every 7 epochs; the SGD optimization method is adopted with a momentum of 0.9; the loss function is cross entropy; the data preprocessing method is a random rotation of the image by -20 to 20 degrees.
In the identification of the present invention, after feature fusion is performed, the fused feature is input to a discriminator for the final category judgment. To better control the number of parameters, the method constructs the discriminator with GAP (Global Average Pooling): 3 convolution layers are followed by a global average pooling layer, and a softmax function computes the probability of each class for a sample, realizing the final concentration identification of the fog scene image.
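A minimal PyTorch sketch of such a GAP discriminator might look as follows; the channel widths and the number of fog classes (5, taken from the application example below) are assumptions:

```python
import torch.nn as nn

class GAPDiscriminator(nn.Module):
    """3 convolution layers + global average pooling + softmax, keeping the
    parameter count small by avoiding fully connected layers."""
    def __init__(self, in_ch: int = 256, n_classes: int = 5):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, n_classes, 1),        # one feature map per fog class
        )
        self.gap = nn.AdaptiveAvgPool2d(1)      # global average pooling

    def forward(self, x):
        scores = self.gap(self.convs(x)).flatten(1)
        return scores.softmax(dim=1)            # probability of each class
```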
The innovation points of the invention are as follows:
(a) A self-adaptive image slicing method is proposed: the Canny operator is used to obtain image edges, the number of edge pixels serves as the measure of image information, and the slice with the largest information content is selected.
(b) The feature-fusion-based fog concentration level identification network EnvNet is composed of a global image feature extraction module, a detail feature extraction module, and a feature fusion identification module. Rich global image features are acquired through multi-channel fusion to compensate for the information loss of the original RGB image caused by image fading; an expressive local image slice is obtained by the self-adaptive image slicing method, and the detail features of the fog scene image are extracted by a convolution module, increasing the evidence available to the model; the global and detail features are fused and then fed into the fog concentration identification network, realizing high-quality fog concentration grade discrimination.
(c) An intelligent visibility identification algorithm is constructed, based mainly on a deep convolutional neural network and assisted by traditional feature extraction; it generalizes better and is more suitable for complex environments and scenes such as night and adverse weather.
Compared with the prior art, the invention has the beneficial effects that:
the method provided by the invention abandons the idea of judging fog concentration through visibility sensing detection and visibility estimation in the past, and adopts an image recognition method to directly divide a scene image into clear, light fog, thick fog and other different fog concentration levels (note that the reference standard of fog concentration is national standard GB/T31445-2015, as shown in Table 1), so that while a mathematical model is simplified, the accuracy and robustness of visibility grading judgment are improved by fusing a fog concentration deep learning network with global and detailed multiple features, and the method has the technical characteristics of data driving, automatic feature recognition, self-learning and the like.
The effect is preferably demonstrated with quantified data, as in the application example below.
Drawings
FIG. 1 is a diagram of the EnvNet network structure;
FIG. 2 is a block diagram of a global feature extraction module;
FIG. 3 is a diagram of a detail feature extraction module;
FIG. 4 is a schematic view of different fog levels;
FIG. 5 is a graph comparing the performance of different network training processes: (a) model training loss; (b) model training accuracy.
Detailed Description
The present invention will be described in further detail with reference to examples.
It will be appreciated by those skilled in the art that the following examples are illustrative of the invention only and should not be taken as limiting its scope. Where the examples do not specify particular techniques or conditions, they follow the techniques or conditions described in the literature in the art or the product specifications. Where materials or equipment are used without naming a manufacturer, they are conventional products available commercially.
The invention provides a brand-new fog scene image recognition network based on multi-feature fusion, EnvNet (see FIG. 1), which mainly comprises three parts: (1) a global image feature extraction module; (2) a detail feature extraction module; (3) a feature fusion and recognition module. Each is described in detail below.
(1) Global feature extraction module
A fog scene image is an outdoor scene image composed of static background targets and dynamic foreground targets, and in practice, changes in the dynamic foreground interfere with scene image recognition. In addition, dynamic factors such as illumination and weather in outdoor scenes also adversely affect image recognition. These adverse factors place higher robustness requirements on recognition algorithms for scene images. Moreover, analysis of fog scene images shows that fog density describes the overall atmosphere of the current scene rather than local characteristics; that is, when humans visually judge fog density, they mainly rely on overall perception of the image and judgments based on global characteristics. Therefore, when performing fog scene image recognition, obtaining enough global image features with high expressive power is of great significance for improving overall recognition accuracy. The overall framework is shown in FIG. 2.
To alleviate the reduction in image information caused by image degradation, the input of the global feature extraction module adds a dark channel image to the common three RGB channels (formula 1).
I_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} I_c(y) )        (1)
In the formula, I_dark(x) is the dark channel value; c is a color channel of the three primary colors; I_c(y) is the c-channel pixel value of the image at position y; and Ω(x) is a region centered on pixel x. The dark channel prior theory states that I_dark → 0. The dark elements that produce the dark channel come, first, from black areas in the image, such as shadows of various objects and vehicle tires; and second, from dark-colored objects such as trees, cars, pedestrians, and buildings.
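As a minimal sketch, the dark channel of formula (1) can be computed in Python as a per-pixel minimum over the color channels followed by a minimum filter over the region Ω(x); the 15 × 15 patch size is an assumption, since it is not specified here:

```python
import cv2
import numpy as np

def dark_channel(image_bgr: np.ndarray, patch_size: int = 15) -> np.ndarray:
    """Formula (1): per-pixel minimum over the three color channels, then a
    minimum filter over the local region Omega(x)."""
    min_over_channels = image_bgr.min(axis=2)  # min over c in {r, g, b}
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch_size, patch_size))
    # Grayscale erosion realizes the minimum over the local region Omega(x).
    return cv2.erode(min_over_channels, kernel)
```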
In addition, as the number of network layers increases, the network tends to capture global semantic information of the image, which may increase its sensitivity to dynamic factors (dynamic targets, etc.) and also makes overfitting more likely. To extract global image features with sufficient expressive power, the global image feature extraction module therefore contains only 5 convolution layers, 2 pooling layers, and 1 fully connected layer (see FIG. 2). 5 × 5 and 3 × 3 dilated convolution kernels are adopted in conv1 and conv2 respectively, enlarging the receptive field of each pixel on the feature map so that global image features are acquired quickly in the shallow layers, while the number of parameters is controlled and the overfitting risk reduced. Meanwhile, to improve the convergence rate and avoid under-fitting caused by an overly simple model, a BN layer follows each convolution layer and eRelu is used as the activation function. Finally, an n × 1-dimensional feature vector is output through the fully connected layer; this vector is the extracted global image feature.
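A minimal PyTorch sketch of such a global module is given below; the channel widths, the pooling positions within the constraints stated above, and the output dimension n = 128 are illustrative assumptions, and plain ReLU stands in for the eRelu activation named in the text:

```python
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch, k, dilation=1):
    """Convolution + BN + activation block used throughout the module."""
    pad = dilation * (k - 1) // 2  # keep spatial size
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=dilation),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),  # stand-in for the eRelu activation
    )

class GlobalFeatureExtractor(nn.Module):
    """5 conv layers, 2 pooling layers, 1 fully connected layer; conv1/conv2
    use 5x5 and 3x3 dilated kernels to enlarge the shallow receptive field.
    Input is 4 channels (RGB + dark channel)."""
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn(4, 32, 5, dilation=2),   # conv1: 5x5 dilated
            conv_bn(32, 64, 3, dilation=2),  # conv2: 3x3 dilated
            conv_bn(64, 64, 3),              # conv3
            nn.MaxPool2d(2),                 # pool after the third conv
            conv_bn(64, 128, 3),             # conv4
            conv_bn(128, 128, 3),            # conv5
            nn.MaxPool2d(2),                 # pool after the last conv
            nn.AdaptiveAvgPool2d(1),         # collapse spatial dims (assumption)
        )
        self.fc = nn.Linear(128, feat_dim)   # n x 1 global feature vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))
```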
(2) Detail feature extraction module
When performing density recognition and classification on fog scene images, one important challenge is that as fog concentration rises, the differences between fog concentration levels as shown in the image become smaller and smaller. The reason is that image degradation grows more severe as fog concentration rises, and a marginal effect appears as degradation keeps strengthening: each further increase in degradation changes the imaged picture less and less, so the fog concentration of an image cannot be judged accurately from global features alone. EnvNet therefore also uses the detail features of the image for fog concentration identification. However, since the original scene image is too large, a downsampling operation is required before CNN-based feature extraction, which loses part of the detail information; furthermore, if the full image is processed directly, it is difficult to acquire sufficiently expressive features for the useful image details, since the details are very small compared with the full image. Therefore, EnvNet introduces a separate detail feature extraction module to obtain discriminative detail features.
In the detail feature extraction module, two problems mainly need to be solved: 1) how to locate the required detail area; 2) how to obtain discriminative features from the detail area. For the first problem, the invention proposes a self-adaptive slicing method based on the following assumption: the richer the edges, the more drastic the gradient changes of the image and the higher its entropy, and thus the larger its information content. First, 10 non-overlapping image slices are randomly generated on the original image; then each slice is processed with the Canny operator to obtain its image edges, the edge pixels of each slice are counted, and the slice with the most edge pixels is selected as the target slice. For the second problem, the invention again adopts a CNN-based feature extraction module. Since the detail slice has already been generated in the previous step, the module's main task is to acquire the most expressive features of the slice, so a deeper network structure can be used here to improve its descriptive power over the image slice. Considering the overfitting risk, EnvNet adopts a network structure similar to VGG16 (see FIG. 3), with 8 convolution layers, 4 pooling layers, and 1 fully connected layer; to improve the convergence rate and ensure smooth fitting, a BN layer is introduced after each convolution operation, and eRelu is adopted as the activation function. Finally, an N × 1-dimensional feature vector is generated through the fully connected layer; this vector is the extracted detail image feature. Both steps are sketched below.
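A minimal Python sketch of both steps follows, assuming OpenCV and PyTorch; the slice size, Canny thresholds, channel widths and output dimension N = 128 are illustrative assumptions, and plain ReLU stands in for the eRelu activation:

```python
import cv2
import numpy as np
import torch.nn as nn

def select_target_slice(image, n_slices=10, slice_size=224, seed=0):
    """Adaptive slicing: randomly place non-overlapping windows, run the
    Canny operator on each, and keep the slice with the most edge pixels."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    boxes = []
    while len(boxes) < n_slices:  # rejection-sample non-overlapping windows
        y, x = rng.integers(0, h - slice_size), rng.integers(0, w - slice_size)
        if all(abs(y - by) >= slice_size or abs(x - bx) >= slice_size
               for by, bx in boxes):
            boxes.append((y, x))
    patches = [image[y:y + slice_size, x:x + slice_size] for y, x in boxes]
    edge_counts = [
        int((cv2.Canny(cv2.cvtColor(p, cv2.COLOR_BGR2GRAY), 100, 200) > 0).sum())
        for p in patches
    ]
    return patches[int(np.argmax(edge_counts))]

class DetailFeatureExtractor(nn.Module):
    """VGG16-like module: 8 conv layers, 4 pooling layers, 1 FC layer;
    BN after every conv, one pooling layer after every two convs."""
    def __init__(self, feat_dim=128):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in (32, 64, 128, 256):          # 4 stages x 2 convs = 8 convs
            for _ in range(2):
                layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                           nn.BatchNorm2d(out_ch),
                           nn.ReLU(inplace=True)]  # stand-in for eRelu
                in_ch = out_ch
            layers.append(nn.MaxPool2d(2))         # pool after every two convs
        self.features = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(256, feat_dim)         # N x 1 detail feature vector

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))
```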
(3) Feature fusion
Different features have different attributes and express the original image from different angles, so how to combine them effectively is a key step in improving overall recognition efficiency. In EnvNet, feature fusion refers to the process of organically combining global features and local features, with the aim of providing multi-view feature information for the final image classification. EnvNet uses cascade fusion (concat): the global features and the detail features are stacked at the channel level, generating a new feature with 2d channels, as shown in formula (2).
F_fuse = concat(F_g, F_l)        (2)

wherein F_fuse denotes the fused feature vector, of size (i, j, 2d); F_g denotes the global feature vector, of size (i, j, d); and F_l denotes the detail feature vector, of size (i, j, d).
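In code, the cascade fusion of formula (2) is a single channel-wise concatenation; a minimal PyTorch sketch:

```python
import torch

def cascade_fusion(f_global: torch.Tensor, f_detail: torch.Tensor) -> torch.Tensor:
    """Formula (2): stack the global and detail features along the channel
    dimension, doubling the channel count from d to 2d."""
    return torch.cat([f_global, f_detail], dim=1)
```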
The invention adopts the same training set and test set and trains three models simultaneously: (1) the global feature extraction module, whose input is feature data of the four channels, RGB plus the dark channel; (2) the detail feature extraction module, whose input is the sliced image data; (3) the feature fusion identification module, whose inputs are the global and local feature vectors. Training parameters: the learning rate is 0.001, decayed by 0.0005 every 7 epochs; the SGD optimization method is adopted with a momentum of 0.9; the loss function is cross entropy; data preprocessing: random image rotation of -20 to 20 degrees. The finally obtained neural network model is used by inputting an original image; the output is the fog concentration level of that image.
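A sketch of this training configuration in PyTorch is given below; the placeholder model and the reading of the decay schedule are assumptions:

```python
from torch import nn, optim
from torchvision import transforms

model = nn.Linear(256, 5)  # placeholder standing in for the assembled EnvNet

optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)  # SGD, momentum 0.9
# "Decayed by 0.0005 every 7 epochs" is read here as halving the initial
# 0.001 rate every 7 epochs (the first step is exactly a 0.0005 drop); the
# exact schedule is ambiguous in the source, so this is an assumption.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.5)
criterion = nn.CrossEntropyLoss()                 # cross-entropy loss
augment = transforms.RandomRotation(degrees=20)   # random rotation in [-20, 20]
```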
Examples of the applications
The data used in the experiment are real pictures taken by highway checkpoint cameras; the image size is 1920 × 1080 and the time span is one year, covering different weather and seasons. Data labeling was performed by professionals manually checking the fog concentration against visibility data from conventional weather instruments; the specific rules are shown in Table 1. The training set comprises 5000 fog scene pictures of different fog concentrations, with 5 classes of 1000 pictures each; the test set comprises 1500 fog scene pictures of different fog concentrations, with 5 classes of 250 pictures each, as shown in FIG. 4.
TABLE 1 Fog scale division
[Table provided as an image in the original document; contents not reproduced.]
The proposed EnvNet is compared with classical image classification models to verify its superiority; the network models participating in the comparison include Inception V4, Inception-ResNet V2, EfficientNet-B0, EfficientNet-B1, EfficientNet-B2, and EfficientNet-B7. The input image size of each model is that network's standard size, obtained by resizing the original image. The main training parameters are: the learning rate is 0.001, decayed by 0.0005 every 7 epochs; the SGD optimization method is adopted with a momentum of 0.9; the loss function is cross entropy; data preprocessing: random rotation of -20 to 20 degrees. The accuracy comparison of the fog level classification models is shown in Table 2, and the loss variation during training is shown in FIG. 5.
From the experimental results: (1) In terms of the training process, as shown in FIG. 5, EnvNet converges fastest and to the lowest final loss, indicating that the proposed EnvNet has good model fitting capability. (2) In terms of recognition performance, as shown in Table 2, EnvNet has the best classification performance on the fog scene image recognition problem; apart from Inception-ResNet V2, which is close to EnvNet (EnvNet 0.9223, Inception-ResNet V2 0.9196), the remaining network structures all trail by at least 0.02. EnvNet also performs well on parameter scale: of the three best-performing networks (Inception-ResNet V2, EfficientNet-B7, EnvNet), it has the smallest number of parameters. Therefore, the proposed EnvNet has the best comprehensive performance for this scene image recognition problem.
TABLE 2 Comparison of fog level identification performance across different networks
[Table provided as an image in the original document; contents not reproduced.]
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above; the embodiments and the description merely illustrate the principles of the invention, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (5)

1. A fog concentration grade identification method based on video image multi-feature fusion is characterized by comprising the following steps:
step (1), randomly generating a plurality of non-overlapping image slices on an original image, processing each slice with a Canny operator to obtain the image edges of each slice, counting the number of edge pixels of each slice, and selecting the slice with the largest number of edge pixels as a target slice;
step (2), a detail feature extraction module is adopted to extract detail image features of the target slice;
the detail feature extraction module comprises 8 convolution layers, 4 pooling layers and 1 fully connected layer; a BN layer is introduced after each convolution layer, and eRelu is adopted as the activation function; a pooling layer is set after every two consecutive convolution operations; finally, an N × 1-dimensional feature vector is generated through the fully connected layer, and this vector is the extracted detail image feature;
step (3), a global image feature extraction module is adopted to extract global image features of the original image;
the global image feature extraction module comprises 5 convolution layers, 2 pooling layers and 1 fully connected layer; a BN layer is introduced after each convolution layer, and eRelu is adopted as the activation function; a pooling layer is set after the third convolution operation and after the last convolution operation; finally, an n × 1-dimensional feature vector is output through the fully connected layer, and this vector is the extracted global image feature; the input of the global feature extraction module comprises the three RGB channels and a dark channel input;
step (4), fusing and identifying the detail image features obtained in step (2) and the global image features obtained in step (3) by using a feature fusion identification module, to obtain the fog concentration grade of the original image; the specific fusion method is: a cascade fusion method is used to stack the global features and the detail image features at the channel level, generating a new feature with doubled channels.
2. The fog density level identification method based on video image multi-feature fusion as claimed in claim 1, wherein in step (1), 10 non-overlapping image slices are randomly generated on the original image.
3. The fog concentration level identification method based on video image multi-feature fusion as claimed in claim 1, characterized in that in step (3), a 5 × 5 dilated convolution kernel is adopted in the first convolution layer and a 3 × 3 dilated convolution kernel in the second convolution layer.
4. The fog concentration level recognition method based on video image multi-feature fusion as claimed in claim 1, characterized in that the same training set and test set are used and three models are trained simultaneously: (1) the global feature extraction module, whose input is feature data of the four channels, RGB plus the dark channel; (2) the detail feature extraction module, whose input is the sliced image data; (3) the feature fusion identification module, whose inputs are the global and local feature vectors.
5. The fog concentration level recognition method based on video image multi-feature fusion as claimed in claim 4, characterized in that the training parameters are: the learning rate is 0.001, decayed by 0.0005 every 7 epochs; the SGD optimization method is adopted with a momentum of 0.9; the loss function is cross entropy; the data preprocessing method is a random rotation of the image by -20 to 20 degrees.
CN202011511720.6A 2020-12-18 2020-12-18 Fog concentration grade identification method based on video image multi-feature fusion Active CN112686105B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011511720.6A CN112686105B (en) 2020-12-18 2020-12-18 Fog concentration grade identification method based on video image multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011511720.6A CN112686105B (en) 2020-12-18 2020-12-18 Fog concentration grade identification method based on video image multi-feature fusion

Publications (2)

Publication Number Publication Date
CN112686105A CN112686105A (en) 2021-04-20
CN112686105B true CN112686105B (en) 2021-11-02

Family

ID=75450277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011511720.6A Active CN112686105B (en) 2020-12-18 2020-12-18 Fog concentration grade identification method based on video image multi-feature fusion

Country Status (1)

Country Link
CN (1) CN112686105B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973110B (en) * 2022-07-27 2022-11-01 四川九通智路科技有限公司 On-line monitoring method and system for highway weather

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957309A (en) * 2010-08-17 2011-01-26 招商局重庆交通科研设计院有限公司 All-weather video measurement method for visibility
CN104794486A (en) * 2015-04-10 2015-07-22 电子科技大学 Video smoke detecting method based on multi-feature fusion
CN105046218A (en) * 2015-07-09 2015-11-11 华南理工大学 Multi-feature traffic video smoke detection method based on serial parallel processing
CN105957040A (en) * 2016-05-19 2016-09-21 湖南源信光电科技有限公司 Rapid defog algorithm based on image fusion
CN107203981A (en) * 2017-06-16 2017-09-26 南京信息职业技术学院 A kind of image defogging method based on fog concentration feature
CN109961070A (en) * 2019-03-22 2019-07-02 国网河北省电力有限公司电力科学研究院 The method of mist body concentration is distinguished in a kind of power transmission line intelligent image monitoring
CN110544213A (en) * 2019-08-06 2019-12-06 天津大学 Image defogging method based on global and local feature fusion
CN110705619A (en) * 2019-09-25 2020-01-17 南方电网科学研究院有限责任公司 Fog concentration grade judging method and device
CN111915530A (en) * 2020-08-06 2020-11-10 温州大学 End-to-end-based haze concentration self-adaptive neural network image defogging method

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITTO20030770A1 (en) * 2003-10-02 2005-04-03 Fiat Ricerche LONG-DETECTION DETECTOR LONG ONE
KR100708478B1 (en) * 2004-09-24 2007-04-18 삼성전자주식회사 Toner composition
US9710715B2 (en) * 2014-12-26 2017-07-18 Ricoh Company, Ltd. Image processing system, image processing device, and image processing method
CN107424159B (en) * 2017-07-28 2020-02-07 西安电子科技大学 Image semantic segmentation method based on super-pixel edge and full convolution network
JP7000106B2 (en) * 2017-10-13 2022-01-19 キヤノン株式会社 Developing equipment, process cartridges and image forming equipment
CN114942454A (en) * 2019-03-08 2022-08-26 欧司朗股份有限公司 Optical package for a LIDAR sensor system and LIDAR sensor system
US11301974B2 (en) * 2019-05-27 2022-04-12 Canon Kabushiki Kaisha Image processing apparatus, image processing method, image capturing apparatus, and storage medium
KR102115928B1 (en) * 2019-10-31 2020-05-27 엘아이지넥스원 주식회사 Apparatus and Method for Eliminating Haze using Stereo Matching Method and Deep Learning Algorithm
CN111275168A (en) * 2020-01-17 2020-06-12 南京信息工程大学 Air quality prediction method of bidirectional gating circulation unit based on convolution full connection
CN111583136B (en) * 2020-04-25 2023-05-23 华南理工大学 Method for simultaneously positioning and mapping autonomous mobile platform in rescue scene
CN111738064B (en) * 2020-05-11 2022-08-05 南京邮电大学 Haze concentration identification method for haze image
CN111738314B (en) * 2020-06-09 2021-11-02 南通大学 Deep learning method of multi-modal image visibility detection model based on shallow fusion
CN111783732A (en) * 2020-07-17 2020-10-16 上海商汤智能科技有限公司 Group mist identification method and device, electronic equipment and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101957309A (en) * 2010-08-17 2011-01-26 招商局重庆交通科研设计院有限公司 All-weather video measurement method for visibility
CN104794486A (en) * 2015-04-10 2015-07-22 电子科技大学 Video smoke detecting method based on multi-feature fusion
CN105046218A (en) * 2015-07-09 2015-11-11 华南理工大学 Multi-feature traffic video smoke detection method based on serial parallel processing
CN105957040A (en) * 2016-05-19 2016-09-21 湖南源信光电科技有限公司 Rapid defog algorithm based on image fusion
CN107203981A (en) * 2017-06-16 2017-09-26 南京信息职业技术学院 A kind of image defogging method based on fog concentration feature
CN109961070A (en) * 2019-03-22 2019-07-02 国网河北省电力有限公司电力科学研究院 The method of mist body concentration is distinguished in a kind of power transmission line intelligent image monitoring
CN110544213A (en) * 2019-08-06 2019-12-06 天津大学 Image defogging method based on global and local feature fusion
CN110705619A (en) * 2019-09-25 2020-01-17 南方电网科学研究院有限责任公司 Fog concentration grade judging method and device
CN111915530A (en) * 2020-08-06 2020-11-10 温州大学 End-to-end-based haze concentration self-adaptive neural network image defogging method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Dehaze Model to Improve Object Visibility Under Atmospheric Degradation ";T. R. Vijaya Lakshmi 等;《2020 3rd International Conference on Intelligent Sustainable Systems (ICISS)》;20201205;1429-1433 *
"Learning environmental sounds with end-to-end convolutional neural network";Tokozume, Yuji等;《2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)》;20170619;2721–2725 *
"基于区域雾浓度的自适应调参图像去雾方法研究";满美麟;《中国优秀硕士学位论文全文数据库 信息科技辑》;20190915(第9期) *
"基于雾浓度检测和简化大气散射模型的图像去雾算法";吴玉莲;《 国外电子测量技术》;20180715;第37卷(第7期) *
"基于颜色、形状和纹理的多特征融合图像检索";李薇 等;《航空计算技术》;20131125 *

Also Published As

Publication number Publication date
CN112686105A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN111310862B (en) Image enhancement-based deep neural network license plate positioning method in complex environment
CN109460753B (en) Method for detecting floating object on water
CN108830171B (en) Intelligent logistics warehouse guide line visual detection method based on deep learning
CN108509954A (en) A kind of more car plate dynamic identifying methods of real-time traffic scene
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN110310241B (en) Method for defogging traffic image with large air-light value by fusing depth region segmentation
CN107085696A (en) A kind of vehicle location and type identifier method based on bayonet socket image
CN110263706A (en) A kind of haze weather Vehicular video Detection dynamic target and know method for distinguishing
CN109840483B (en) Landslide crack detection and identification method and device
CN109255350A (en) A kind of new energy detection method of license plate based on video monitoring
CN103035013A (en) Accurate moving shadow detection method based on multi-feature fusion
CN114663346A (en) Strip steel surface defect detection method based on improved YOLOv5 network
CN103295013A (en) Pared area based single-image shadow detection method
CN102542293A (en) Class-I extraction and classification method aiming at high-resolution SAR (Synthetic Aperture Radar) image scene interpretation
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN112818775B (en) Forest road rapid identification method and system based on regional boundary pixel exchange
CN110827312A (en) Learning method based on cooperative visual attention neural network
Zhang et al. Application research of YOLO v2 combined with color identification
CN106709412A (en) Traffic sign detection method and apparatus
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN112418028A (en) Satellite image ship identification and segmentation method based on deep learning
CN114972177A (en) Road disease identification management method and device and intelligent terminal
CN112686105B (en) Fog concentration grade identification method based on video image multi-feature fusion
Yang et al. PDNet: Improved YOLOv5 nondeformable disease detection network for asphalt pavement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant