CN116052135B - Foggy day traffic sign recognition method based on texture features - Google Patents


Publication number
CN116052135B
Authority
CN
China
Prior art keywords
texture
foggy
traffic sign
features
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310340645.9A
Other languages
Chinese (zh)
Other versions
CN116052135A (en)
Inventor
刘兆惠 (Liu Zhaohui)
宋润泽 (Song Runze)
贾鑫 (Jia Xin)
王超 (Wang Chao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jilin University
Original Assignee
Jilin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jilin University
Priority to CN202310340645.9A
Publication of CN116052135A
Application granted
Publication of CN116052135B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58: Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582: Recognition of traffic signs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467: Encoded features or binary features, e.g. local binary patterns [LBP]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/54: Extraction of image or video features relating to texture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806: Fusion at the level of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a foggy-day traffic sign recognition method based on texture features, which belongs to the technical field of traffic sign recognition and comprises the following steps: constructing a dataset comprising clear-weather traffic sign images and the corresponding local directional binary pattern texture features fused with an edge detection operator; constructing a texture and convolution feature fusion training model; training the model with the dataset; and recognizing foggy-day traffic sign images with the trained model. The proposed texture and convolution feature fusion training model recognizes foggy-day traffic signs far better than models trained on the haze-free original images, haze-free LBP textures, or haze-free LDP textures, and accomplishes the task of directly recognizing foggy-day traffic signs with a recognition model trained on haze-free texture features.

Description

Foggy day traffic sign recognition method based on texture features
Technical Field
The invention belongs to the technical field of traffic sign recognition, and particularly relates to a foggy day traffic sign recognition method based on texture features.
Background
Traffic signs convey guiding, restricting, warning and indicating information to drivers in the form of characters and symbols, so traffic sign recognition has become a particularly important topic in automatic driving environment perception research. Vision-sensor-based traffic sign recognition, being efficient and accurate, is the mainstream method in the field, and many remarkable achievements have appeared. However, research at the present stage has mainly been conducted under clear weather, and robust traffic sign recognition methods for bad weather conditions are rare. Under bad weather, especially fog, the color distribution of traffic sign images acquired by the visual sensor is shifted and distorted, increasing the complexity of the detection task; these problems have become a key bottleneck for research in this field. Therefore, studying how to improve the traffic sign recognition efficiency of automatic-driving vehicle-mounted visual sensors in foggy environments has important theoretical significance.
As noted above, visual sensors have many advantages for traffic sign detection in clear weather, but their strong dependence on visibility limits their engineering application. In fog, the large amount of turbid medium in the air scatters ambient light severely; besides the reduced detection distance, the quality of images acquired by vehicle-mounted imaging equipment drops significantly, and the occlusion of target detail, the disappearance of salient features, and the reduced information fidelity greatly challenge traffic sign recognition. Therefore, studying how a visual sensor can recognize traffic signs accurately, efficiently and in real time in foggy environments has important practical value for guiding all-weather, all-season safe operation of automatic driving vehicles.
Automatic driving depends on traffic sign recognition and can hardly escape the influence of bad weather. Traffic signs are the most direct carrier of road information and play numerous positive roles in ensuring driving safety and traffic-flow stability; any misjudgment of that information caused by bad weather may lead to immeasurable loss of life and property. Therefore, research on improving traffic sign recognition in foggy environments is an important link in automatic driving.
At present, experts and scholars at home and abroad have conducted research on image defogging, image texture feature extraction (not combined with subsequent high-level vision tasks such as the target detection involved in traffic sign recognition), and traffic sign recognition algorithms under good weather conditions. Feature extraction for target detection is aimed only at clear-weather images and cannot account for the influence of a foggy environment on target features; consequently, traffic sign detection performance on foggy images still cannot be improved significantly at the present stage. Texture feature extraction is the key link in successfully describing, detecting and classifying image textures; however, unlike gray level, color and other image features, texture is hard to define uniformly, and a practical and effective texture feature extraction method is still lacking. In particular, which texture features should be extracted or fused for foggy traffic sign images, so as to reduce sensitivity to fog and improve foggy-day recognition accuracy, still needs intensive study. By studying the gray distribution law of foggy traffic sign images, the invention systematically analyzes how fog affects LBP texture features and GLCM texture features, and finds that the gray statistical features of haze-free texture feature images and foggy texture feature images are extremely similar. From this brand-new viewpoint, a method is proposed in which the texture features of a foggy image are extracted and input into a convolutional neural network to recognize traffic signs in foggy images directly.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a foggy-day traffic sign recognition method based on texture features, which accomplishes the task of directly recognizing foggy-day traffic signs with a recognition model trained on haze-free texture features.
In order to achieve the above purpose, the invention provides a foggy day traffic sign recognition method based on texture features, comprising the following steps:
constructing a dataset comprising: a clear-sky traffic sign image and a corresponding local directional binary pattern texture feature of a fusion edge detection operator;
constructing a texture and convolution feature fusion training model;
training the texture and convolution feature fusion training model by utilizing the data set;
and identifying foggy-day traffic sign images based on the trained texture and convolution feature fusion training model.
Optionally, obtaining the local orientation binary pattern texture feature of the fused edge detection operator includes:
extracting LBP texture features and LDP texture features of the clear-sky traffic sign image;
and fusing the LBP texture features and the LDP texture features to obtain the local orientation binary pattern texture features.
Optionally, fusing the LBP texture feature and LDP texture feature includes:
and (3) adopting a Kirsch operator, and taking the Kirsch operation output value as a binarization weight corresponding to each pixel neighborhood, thereby realizing nonlinear fusion of the LBP texture features and the LDP texture features.
Optionally, the method for fusing the LBP texture feature and the LDP texture feature is as follows:
$$\mathrm{K\text{-}LDBP}_k=\sum_{i=0}^{7}s\left(P_i-P_c\right)\cdot 2^{w_i}$$

$$s(x)=\begin{cases}1, & x\geq 0\\ 0, & x<0\end{cases}$$

wherein $\mathrm{K\text{-}LDBP}_k$ denotes the local directional binary pattern of the fused edge detection operator; $k$ is the $k$-th largest edge response value; $s(x)$ is the comparison function, taking the value 0 or 1, with $x$ as its argument; $P_i$ ($i=0,1,\dots,7$) is the gray value of a pixel, excluding the central pixel, in the 3×3 neighborhood of a pixel point A; $P_c$ is the gray value of the pixel point A, the subscript $c$ denoting the pixel point A; and $w_i$ is the weight index.
Optionally, the texture and convolution feature fusion training model is constructed based on a YOLOv5s network.
Optionally, training the texture and convolution feature fusion training model includes:
extracting convolution features of the clear-sky traffic sign image by using the convolution layers in YOLOv5s;
fusing the local directional binary pattern texture features of the fused edge detection operator and the convolution features by utilizing a feature fusion structure in the YOLOv5s network to obtain fused features;
and detecting the training of the texture and convolution feature fusion training model based on the fused features and adjusting model weights.
Optionally, fusing the locally oriented binary pattern texture feature of the fused edge detection operator and the convolution feature includes:
and adding convolution characteristics and texture characteristic information of each target in a sunny environment in the characteristic map through add, and splicing the target characteristics in different environments through concat to realize fusion of the local directional binary pattern texture characteristics of the fusion edge detection operator and the convolution characteristics.
Optionally, identifying the foggy day traffic sign image includes:
extracting local orientation binary pattern texture features of the fusion edge detection operator of the foggy-day traffic sign image;
and inputting the extracted texture image into a trained texture and convolution feature fusion training model, and carrying out foggy day traffic sign recognition.
Compared with the prior art, the invention has the following advantages and technical effects:
according to the invention, LBP and LDP are non-linearly fused, a local directional binary pattern texture feature of a fused edge detection operator is provided, the advantages of the LBP and LDP are complemented by a K-LDBP operator, so that the method has stronger anti-fog property, reduces the calculated amount of the texture feature, accelerates the extraction speed, and can better extract the edge feature information of traffic signs in foggy weather environments;
the recognition model for training the CNN fusion haze-free K-LDBP texture features provided by the invention has a far better recognition effect on foggy weather traffic signs than recognition effects based on haze-free artwork, haze-free LBP textures and haze-free LDP texture training models. The task of training the identification model through the haze-free texture features and directly identifying traffic signs in foggy days is realized;
as the haze-free K-LDBP texture feature image is not obvious compared with the original image color feature, the overall feature is single, so that the texture image occupies less memory, has higher detection timeliness on the average detection time of a single image, and optimizes the model identification efficiency. Especially when the real foggy-day traffic sign image is identified, the identification accuracy of the identification model based on foggy-free K-LDBP texture training is improved to a greater extent compared with the identification model based on image defogging. The traffic sign recognition method based on the fusion texture features has better detection performance because the real foggy environment textures have a certain arrangement rule.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
fig. 1 is a schematic flow chart of a foggy day traffic sign recognition method based on texture features according to an embodiment of the invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
The invention provides a foggy day traffic sign recognition method based on texture features, which comprises the following steps:
constructing a dataset comprising: a clear-sky traffic sign image and a corresponding local directional binary pattern texture feature of a fusion edge detection operator;
constructing a texture and convolution feature fusion training model;
training a texture and convolution characteristic fusion training model by utilizing a data set;
and based on the trained texture and convolution characteristic fusion training model, identifying the foggy day traffic sign image.
Further, obtaining the local orientation binary pattern texture feature of the fused edge detection operator includes:
extracting LBP texture features and LDP texture features of the clear-sky traffic sign image;
and fusing the LBP texture features and the LDP texture features to obtain local orientation binary pattern texture features.
Further, fusing the LBP texture features and the LDP texture features includes:
and by adopting a Kirsch operator and taking the Kirsch operation output value as a binarization weight corresponding to each pixel neighborhood, the nonlinear fusion of the LBP texture features and the LDP texture features is realized.
Further, the texture and convolution feature fusion training model comprises: a YOLOv5s network trained with the clear-sky traffic sign images and the corresponding K-LDBP texture feature images.
further, training the texture and convolution feature fusion training model includes:
extracting convolution features of the clear-sky traffic sign image by using the convolution layers in YOLOv5s;
fusing the local directional binary pattern texture features and the convolution features of the fused edge detection operator by utilizing a feature fusion structure in the YOLOv5s network to obtain fused features;
based on the fused features, training of the texture and convolution feature fusion training model is detected and model weights are adjusted.
Further, fusing the local orientation binary pattern texture feature and the convolution feature of the fused edge detection operator comprises:
the convolution characteristics and texture characteristic information of each target in a sunny environment in the characteristic diagram are added through add, and then the target characteristics in different environments are spliced through concat, so that fusion of the local orientation binary pattern texture characteristics and the convolution characteristics is realized.
Further, identifying the foggy day traffic sign image includes:
extracting local directional binary pattern texture features of a fusion edge detection operator of the foggy day traffic sign image;
and inputting the extracted texture image into a trained texture and convolution feature fusion training model, and carrying out foggy day traffic sign recognition.
As shown in fig. 1, the method for identifying a traffic sign in foggy days based on texture features according to the embodiment specifically includes:
step 1: analyzing the influence of foggy days on the texture characteristics of the traffic sign image;
texture is a fundamental property of an object and is insensitive to changes in illumination, thus taking into account that texture may be less affected by bad weather. Based on the thought, a statistical histogram is drawn from an image gray scale normalization angle, gray scale distribution rules of foggy day traffic sign images are studied, and the influence degree of foggy days on image texture features is analyzed. Randomly selecting a picture from the CCTSDB data set, and generating a synthetic foggy day image and a corresponding LBP texture image. The contrast image gray scale normalization statistical histogram can find that the gray scale statistical characteristics of the foggy image have quite obvious distribution difference compared with the foggy image; and the gray scale statistical feature similarity of the haze-free texture feature image and the foggy day texture feature image is extremely high. Further, the degree of influence of the foggy environment on the LBP (Local Binary Pattern local binary pattern) texture features and the GLCM (Grey Level Co-occurrence Matrix gray Level Co-occurrence matrix) texture features is compared and analyzed, so that the degree of influence of the foggy environment on the LBP texture features is smaller compared with the degree of influence of the GLCM texture features, and the GLCM has stronger noise resistance. The method provides a brand new idea for training the identification model based on the haze-free traffic sign and the texture characteristic data set thereof, and is directly used for identifying the traffic sign in a haze environment.
Step 2: extracting texture features of a local orientation binary pattern (K-LDBP) fused with an edge detection operator;
since the LBP texture features vary significantly in the image region based on the intensity differences, the actual intensity of the location where they are located cannot be calculated and the orientation information is too sensitive to variation. Considering that traffic sign recognition is an important component of automatic driving environment perception, the real-time requirement is high in the actual application scene, the extraction speed of the texture features of the traffic sign is required to be optimized, and particularly, the recognition accuracy of the traffic sign image in the real foggy environment is required to be improved, so that the extraction method of the texture features is required to be further improved according to the characteristics of the traffic sign.
For this purpose, a local directional binary pattern texture feature fused with an edge detection operator (Kirsch Local Directional Binary Pattern, K-LDBP) is proposed, which nonlinearly fuses LBP and LDP (Local Directional Pattern).
The huge computation of LDP is avoided through the traditional LBP binarization scheme while the advantages of LDP are retained: the Kirsch operator is still adopted, and the Kirsch output values are used as the binarization weights of each pixel neighborhood, realizing nonlinear fusion of LBP and LDP and better extracting the edge feature information of traffic signs. The specific fusion is shown in formulas (1) and (2).
$$\mathrm{K\text{-}LDBP}_k=\sum_{i=0}^{7}s\left(P_i-P_c\right)\cdot 2^{w_i} \qquad (1)$$

$$s(x)=\begin{cases}1, & x\geq 0\\ 0, & x<0\end{cases} \qquad (2)$$

wherein $\mathrm{K\text{-}LDBP}_k$ denotes the local directional binary pattern of the fused edge detection operator; $k$ is the $k$-th largest edge response value; $s(x)$ is the comparison function, taking the value 0 or 1, with $x$ as its argument; $P_i$ ($i=0,1,\dots,7$) is the gray value of a pixel, excluding the central pixel, in the 3×3 neighborhood of a pixel point A; $P_c$ is the gray value of the pixel point A, the subscript $c$ denoting the pixel point A; and $w_i$ is the weight index.
The 8 Kirsch direction templates correspond respectively to the neighboring pixels $P_i$, $i=0,1,\dots,7$, thereby giving weight measures of the gray value variation of adjacent pixels in different directions. According to the outputs $m_i$ of the 8 Kirsch templates, each pixel is assigned a weight index $w_i$ (a number between 0 and 7) to achieve the nonlinear fusion of LBP and LDP.
The basic principle of the K-LDBP texture feature is that the output of the Kirsch operator template in a particular direction indicates the likelihood of an edge occurring in that direction. Since LBP represents the gray value variation of adjacent pixels in the same direction, the Kirsch output values are used to assign the binary weights in the decimal encoding.
The K-LDBP operator improves on the empirical assignment of the parameter k in traditional LDP and encodes rotation invariance into the main formula, reducing the computation of texture features, accelerating extraction, and better extracting the edge feature information of traffic signs in foggy environments. In addition, the K-LDBP operator combines the complementary advantages of the LBP and LDP algorithms, is more fog-resistant, and extracts traffic sign textures more distinctly in both clear-weather and foggy images. Therefore, the texture features extracted by the K-LDBP operator are selected for fusion training with convolution features, and a texture and convolution feature fusion training model is constructed to improve foggy-day traffic sign recognition performance.
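Formulas (1) and (2) admit a compact sketch. The following implements one plausible reading of the K-LDBP code for a single 3×3 neighborhood: the eight Kirsch responses $m_i$ are ranked, the rank becomes the weight index $w_i$, and the LBP-style binarization $s(P_i-P_c)$ selects which powers of two are summed. The mask order, neighbor offsets, and tie-breaking are assumptions, not details given in the patent:

```python
import numpy as np

# The eight Kirsch edge masks M0..M7 (east, northeast, ..., southeast).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],   # M0 east
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],   # M1 northeast
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],   # M2 north
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],   # M3 northwest
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],   # M4 west
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],   # M5 southwest
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],   # M6 south
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],   # M7 southeast
)]

# Neighbour offsets P0..P7 (row, col), matched to the mask directions above.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def kldbp_code(patch):
    """K-LDBP code of the centre pixel of a 3x3 patch: binarise neighbours
    against the centre as in LBP, but weight bit i by 2**w_i, where w_i is
    the rank of the Kirsch response m_i (strongest direction gets weight 7)."""
    patch = np.asarray(patch, dtype=float)
    m = np.array([(k * patch).sum() for k in KIRSCH])  # 8 Kirsch responses m_i
    w = np.argsort(np.argsort(m))                      # rank of each direction, 0..7
    c = patch[1, 1]
    bits = [1 if patch[1 + dr, 1 + dc] >= c else 0     # s(P_i - P_c)
            for (dr, dc) in OFFSETS]
    return int(sum(b << int(wi) for b, wi in zip(bits, w)))
```

A flat patch yields the all-ones code 255 (every neighbor equals the center), and a patch whose center dominates all neighbors yields 0, matching what formula (1) predicts.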
Step 3: construction of texture and convolution feature fusion training model
In order to fully utilize the K-LDBP texture characteristics with strong fog resistance, the traffic sign in the foggy environment is identified by constructing the texture and convolution characteristic fusion training model. The construction of the texture and convolution feature fusion model comprises three parts of training data set construction, traffic sign texture feature and convolution feature extraction and texture and convolution feature fusion.
First, clear-weather images are obtained from the Chinese traffic sign detection dataset (CSUST Chinese Traffic Sign Detection Benchmark, CCTSDB), their texture features are extracted with the K-LDBP operator, and texture feature maps rich in feature points, together with the corresponding clear-weather images, are selected as training data. The training, validation and test sets are divided in the ratio 8:1:1. The CCTSDB classification labels currently cover three major categories, namely mandatory signs, prohibitory signs and warning signs, so this dataset allows the recognition effect of the proposed method to be evaluated intuitively. Then, the convolution layers in YOLOv5s are used to extract the convolution features of the clear-weather images. Finally, effective fusion of the texture features and convolution features is realized with the feature fusion structure in YOLOv5s.
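The 8:1:1 division described above can be sketched as a generic shuffled split (function name and seed are illustrative, not from the patent):

```python
import random

def split_dataset(samples, ratios=(8, 1, 1), seed=42):
    """Shuffle and divide samples into train/val/test sets by the given ratios."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    total = sum(ratios)
    n_train = len(samples) * ratios[0] // total
    n_val = len(samples) * ratios[1] // total
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(1000))  # 8:1:1 -> 800 / 100 / 100
```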
Specifically, two implementation approaches are included: add and concat. Add runs through the whole feature extraction process and exists in the residual units of the residual blocks. Corresponding feature maps are superimposed by add to composite the feature maps, and the next-layer operation is then performed. Without changing the dimensions, the information content of the feature map, i.e., the information under each feature, is increased.
Concat is the splicing of feature maps; it exists in the CSP structure, where some output feature layers of the backbone are spliced with the upsampling of a later convolution layer, expanding the dimensions of the feature map and realizing feature fusion. Unlike add, the concat operation fuses the information of multi-scale features and residual convolution layers, adding descriptive features for a class of objects in the image rather than features of a single object.
When the same training batch contains both a clear-weather image and its corresponding texture feature image, the convolution features and texture feature information of each target in the clear-weather environment are superimposed in the feature map through add, and the target features under different environments are then spliced through concat, realizing image feature fusion. The fused features are used to train the detection network and to adjust the model weights reasonably, improving foggy-day traffic sign recognition performance.
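The difference between the two fusion approaches can be shown with plain arrays; this is only a shape-level sketch of add versus concat, not the YOLOv5s implementation:

```python
import numpy as np

def fuse_add(a, b):
    """'add' fusion: element-wise sum of two feature maps; shape is unchanged."""
    assert a.shape == b.shape, "add requires identical shapes"
    return a + b

def fuse_concat(a, b, axis=0):
    """'concat' fusion: splice feature maps along the channel axis; channels add up."""
    return np.concatenate([a, b], axis=axis)

conv_feat = np.ones((64, 20, 20))        # hypothetical convolution feature map (C, H, W)
tex_feat = np.full((64, 20, 20), 2.0)    # hypothetical K-LDBP texture feature map
added = fuse_add(conv_feat, tex_feat)       # still (64, 20, 20): richer per-channel info
stacked = fuse_concat(conv_feat, tex_feat)  # (128, 20, 20): more descriptive channels
```

Add enriches the information under each existing feature channel; concat grows the channel dimension, which is why it suits splicing multi-scale features.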
Step 4: foggy day traffic sign identification
First, the required synthetic-fog test set and real-fog traffic sign image test set are constructed. The synthetic foggy dataset is obtained by extracting clear-weather images from the CCTSDB dataset and adding artificial noise of different degrees according to the atmospheric scattering model based on the Koschmieder law. Because no public real-fog traffic sign dataset exists at present, 300 real foggy traffic sign images were collected to build a test set; the images were acquired mainly from June 2020 to September 2021 in Qingdao, chiefly on urban roads in foggy weather, mainly with a vehicle-mounted camera. Video was captured at 1080P resolution, and the detection targets were annotated with labelImg. To increase data universality, foggy traffic sign images were also selected from other public datasets (such as BDD100K, Oxford RobotCar Dataset and ApolloScape) as a supplement.
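The synthetic-fog construction follows the atmospheric scattering model; a minimal sketch, assuming a per-pixel depth map and a global atmospheric light (both illustrative inputs), is:

```python
import numpy as np

def synthesize_fog(clear, depth, beta=1.0, airlight=0.9):
    """Atmospheric scattering model (Koschmieder law):
    I(x) = J(x) * t(x) + A * (1 - t(x)), with transmission t(x) = exp(-beta * d(x)).
    clear: haze-free image J with values in [0, 1]; depth: per-pixel depth map d;
    beta: scattering coefficient (larger beta = denser fog); airlight: atmospheric light A."""
    t = np.exp(-beta * np.asarray(depth, dtype=float))
    return np.asarray(clear, dtype=float) * t + airlight * (1.0 - t)

clear = np.full((4, 4), 0.2)                               # dark haze-free patch
near = synthesize_fog(clear, depth=np.full((4, 4), 0.1))   # light fog (small depth)
far = synthesize_fog(clear, depth=np.full((4, 4), 3.0))    # heavy fog (large depth)
```

Varying beta or the depth map produces the "artificial noise with different degrees" mentioned above: as depth grows, pixel values converge to the airlight, washing out the scene.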
When a foggy-day traffic sign is recognized, the K-LDBP operator first extracts the texture of the foggy-day traffic sign image, and the extracted texture is then input to the texture and convolution feature fusion training model for traffic sign recognition.
Step 5: foggy day traffic sign recognition performance verification
Comparative verification is carried out on the synthetic-fog and real-fog traffic sign data sets constructed by the invention. The robustness of the foggy-environment traffic sign recognition model based on CNN fusion of K-LDBP texture features is verified through the recognition performance metrics Precision, Recall, and mAP (mean Average Precision), and through comparison with foggy-environment recognition models based on LBP and LDP texture features. Further, the method is compared with the foggy-day traffic sign recognition method based on image defogging: that method first defogs the foggy image and then recognizes traffic signs on the defogged image with a conventionally trained model, so the defogging time per image is long. In contrast, the fog-free K-LDBP texture feature image has inconspicuous color features and simpler overall features than the original image, so the texture image occupies less memory, the average detection time per image is shorter, and the model's recognition efficiency is optimized. Especially when real foggy-day traffic sign images are recognized, the recognition accuracy of the model trained on fog-free K-LDBP textures improves by a larger margin than that of the image-defogging-based model. Because textures in a real foggy environment follow a certain arrangement regularity, the traffic sign recognition method based on fused texture features achieves better detection performance.
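A minimal sketch of the evaluation metrics named above (Precision, Recall, and mAP taken as the mean of per-class average precisions); the IoU matching that produces the TP/FP/FN counts is outside this sketch:

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def mean_average_precision(per_class_ap):
    """mAP: mean of the average precision of each traffic sign class."""
    return sum(per_class_ap) / len(per_class_ap)
```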
In this embodiment, considering that texture is a basic attribute of an object and is insensitive to illumination change, step 1 studies the gray distribution rule of foggy-day traffic sign images and systematically analyzes how strongly fog affects LBP texture features and GLCM texture features. It is found that the gray statistical features of fog-free texture feature images and foggy-day texture feature images are extremely similar. Given the current lack of an open-source foggy-day traffic sign data set, this provides a brand-new idea for realizing traffic sign recognition in an automated-driving foggy environment using only an open-source fog-free traffic sign data set.
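The gray-statistics comparison described above can be sketched as follows: extract the normalized gray-level histograms of a fog-free texture image and the corresponding foggy texture image and measure their similarity. The Pearson-correlation measure is an assumption for illustration; the patent does not name its similarity metric.

```python
import numpy as np

def gray_histogram(img, bins=256):
    """Normalized gray-level histogram of an 8-bit image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def histogram_similarity(h1, h2):
    """Pearson correlation of two normalized histograms (1.0 = identical shape)."""
    return float(np.corrcoef(h1, h2)[0, 1])
```

A similarity close to 1 between the fog-free and foggy texture histograms is what motivates training on fog-free textures alone.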
The analysis in step 1 shows that, compared with GLCM texture features, LBP texture features are less affected by fog and have stronger noise immunity. However, because LBP texture features are based on intensity differences, which vary significantly across an image region, the actual intensity at a location cannot be recovered, and the direction information is overly sensitive to change. The texture extraction method therefore needs further improvement for traffic sign features. To this end, step 2 fuses LBP and LDP nonlinearly and proposes the local directional binary pattern texture feature with a fused edge detection operator. The huge computational cost of LDP is reduced through the traditional LBP binarization calculation while the advantages of LDP are retained: the Kirsch operator is still adopted, and the Kirsch output value serves as the binarization weight for each pixel neighborhood, realizing the nonlinear fusion of LBP and LDP. The K-LDBP operator combines the complementary advantages of the LBP and LDP algorithms, is more fog-resistant, reduces the computational cost of the texture features, accelerates extraction, and better extracts the edge feature information of traffic signs in a foggy environment.
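The K-LDBP computation described above can be sketched for a single 3×3 neighborhood as follows. The Kirsch masks are the standard eight-direction operator; treating the Kirsch response as a binarization weight on the LBP-style gray difference, thresholded at the k-th largest response, is an interpretation of the patent text, not a verbatim reproduction of its formula.

```python
import numpy as np

# The eight standard 3x3 Kirsch direction masks.
KIRSCH = [np.array(m) for m in [
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # N
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # NW
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # W
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # SW
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # S
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # SE
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # E
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # NE
]]

def kldbp_code(patch, k=3):
    """One K-LDBP code for a 3x3 patch (hedged sketch).

    The Kirsch edge responses m_i weight the LBP-style differences
    (g_i - g_c); a bit is set (comparison function s = 1) when the
    weighted difference minus the k-th largest response is >= 0.
    """
    m = np.array([abs((KIRSCH[i] * patch).sum()) for i in range(8)])
    m_k = np.sort(m)[-k]                       # k-th largest edge response
    # Neighbors in fixed clockwise order, center excluded.
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    g_c = patch[1, 1]                          # gray value of center pixel
    code = 0
    for i, (r, c) in enumerate(idx):
        if m[i] * (patch[r, c] - g_c) - m_k >= 0:
            code |= 1 << i                     # weight index 2^i
    return code
```

Sweeping this over every 3×3 neighborhood of a clear-sky image would yield the fog-free K-LDBP texture image used for training.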
In step 4, the images in the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB) data set, all of which are fog-free, are fogged based on Koschmieder's law to create a synthetic foggy-day traffic sign data set, and foggy-day traffic sign images captured with a vehicle-mounted camera form a real foggy-environment traffic sign data set. With YOLOv5 as the target detection network framework, a traffic sign recognition model based on convolutional neural network (CNN) fusion of fog-free K-LDBP texture features is trained. Experiments show that the recognition model trained with the CNN fusion of fog-free K-LDBP texture features proposed in step 3 recognizes foggy-day traffic signs far better than models trained on the fog-free original images, fog-free LBP textures, or fog-free LDP textures. The task of training the recognition model on fog-free texture features and directly recognizing traffic signs in foggy weather is thus realized.
Further, a longitudinal comparison is made with the foggy-day traffic sign recognition method based on image defogging. Because that method first defogs the foggy image and then recognizes traffic signs on the defogged image with a conventionally trained model, the defogging time per image is long. In contrast, the fog-free K-LDBP texture feature image has inconspicuous color features and simpler overall features than the original image, so the texture image occupies less memory, the average detection time per image is shorter, and the model's recognition efficiency is optimized. Especially when real foggy-day traffic sign images are recognized, the recognition accuracy of the model trained on fog-free K-LDBP textures improves by a larger margin than that of the image-defogging-based model. Because textures in a real foggy environment follow a certain arrangement regularity, the traffic sign recognition method based on fused texture features achieves better detection performance.
In the foggy-day traffic sign recognition method based on texture features, YOLOv5 is selected as the target detection network framework, synthetic and real foggy-day data sets are established to verify the foggy-day traffic sign recognition results, CNN fusion of fog-free-image texture features is realized, and traffic signs are recognized directly from foggy-day images.
Aiming at the problems that existing open-source traffic sign data sets are collected in fog-free scenes, that fog alters the original features of traffic sign images, and that recognition models trained on fog-free traffic sign data sets therefore achieve low accuracy on foggy-day traffic signs, the method exploits the extremely high similarity between the gray statistical features of fog-free and foggy texture feature images, fuses fog-free texture features with convolution features, and improves the noise resistance of the foggy-day traffic sign recognition model.
In the foggy-day traffic sign recognition method based on texture features, the LBP and LDP algorithms are fused nonlinearly, and the local directional binary pattern with a fused edge detection operator (K-LDBP) is adopted to extract texture features. The empirical assignment of the parameter k in the traditional LDP algorithm is improved, rotation invariance is encoded into the main formula, the computational cost of the texture features is reduced, and their extraction is accelerated, which greatly increases the detection speed and better extracts the edge feature information of traffic signs, truly meeting the real-time requirement of intelligent vehicle target detection.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily conceivable by those skilled in the art within the technical scope of the present application should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (4)

1. The foggy day traffic sign recognition method based on the texture features is characterized by comprising the following steps of:
constructing a dataset comprising: a clear-sky traffic sign image and a corresponding local directional binary pattern texture feature of a fusion edge detection operator;
constructing a texture and convolution feature fusion training model based on a YOLOv5s network;
training the texture and convolution feature fusion training model by utilizing the data set;
based on the trained texture and convolution characteristic fusion training model, identifying foggy-day traffic sign images;
the obtaining of the local orientation binary pattern texture feature of the fusion edge detection operator comprises the following steps:
extracting LBP texture features and LDP texture features of the clear-sky traffic sign image;
fusing the LBP texture features and the LDP texture features to obtain local orientation binary pattern texture features of the fused edge detection operator;
fusing the LBP texture features and LDP texture features includes:
by adopting a Kirsch operator, the Kirsch operation output value is used as a binarization weight corresponding to each pixel neighborhood, so that nonlinear fusion of the LBP texture features and the LDP texture features is realized;
the formula for fusing the LBP texture features and the LDP texture features is:

$$\mathrm{K\text{-}LDBP}_c=\sum_{i=0}^{7} s\!\left(m_i\,(g_i-g_c)-m_k\right)\times 2^i$$

wherein $\mathrm{K\text{-}LDBP}_c$ is the local orientation binary pattern of the fused edge detection operator; $m_i$ is the Kirsch operation output value, used as the binarization weight of the corresponding pixel neighborhood; $m_k$ is the $k$-th largest edge response value; $s(x)$ is the comparison function, taking the value 0 or 1, with $x$ as its argument; $g_i$ is the gray value of a pixel, excluding the central pixel, in the 3×3 neighborhood of a certain pixel point A; $g_c$ is the gray value of the pixel point A; $2^i$ is the weight index; $i$ is the value range, $i$ = 0, 1, …, 7; c denotes the pixel point A.
2. The method of claim 1, wherein training the texture-to-convolution feature fusion training model comprises:
extracting convolution characteristics of the clear-sky traffic sign image by using the convolution layer in the YOLOv5 s;
fusing the local directional binary pattern texture features of the fused edge detection operator and the convolution features by utilizing a feature fusion structure in the YOLOv5s network to obtain fused features;
and training the texture and convolution feature fusion training model of the detection network based on the fused features and adjusting the model weights.
3. The texture-based foggy day traffic sign recognition method of claim 2, wherein fusing the locally oriented binary pattern texture features of the fused edge detection operator and the convolution features comprises:
and adding convolution characteristics and texture characteristic information of each target in a sunny environment in the characteristic map through add, and splicing the target characteristics in different environments through concat to realize fusion of the local directional binary pattern texture characteristics of the fusion edge detection operator and the convolution characteristics.
4. The texture-based foggy day traffic sign recognition method according to claim 1, wherein recognizing the foggy day traffic sign image comprises:
extracting local orientation binary pattern texture features of the fusion edge detection operator of the foggy-day traffic sign image;
and inputting the extracted texture image into a trained texture and convolution feature fusion training model, and carrying out foggy day traffic sign recognition.
CN202310340645.9A 2023-04-03 2023-04-03 Foggy day traffic sign recognition method based on texture features Active CN116052135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310340645.9A CN116052135B (en) 2023-04-03 2023-04-03 Foggy day traffic sign recognition method based on texture features


Publications (2)

Publication Number Publication Date
CN116052135A CN116052135A (en) 2023-05-02
CN116052135B true CN116052135B (en) 2023-07-11

Family

ID=86122130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310340645.9A Active CN116052135B (en) 2023-04-03 2023-04-03 Foggy day traffic sign recognition method based on texture features

Country Status (1)

Country Link
CN (1) CN116052135B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778529A (en) * 2016-11-25 2017-05-31 南京理工大学 A kind of face identification method based on improvement LDP
CN109543656A (en) * 2018-12-17 2019-03-29 南京邮电大学 A kind of face feature extraction method based on DCS-LDP
CN111178312A (en) * 2020-01-02 2020-05-19 西北工业大学 Face expression recognition method based on multi-task feature learning network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106960202B (en) * 2017-04-11 2020-05-19 湖南灵想科技股份有限公司 Smiling face identification method based on visible light and infrared image fusion
CN115393822A (en) * 2022-07-06 2022-11-25 山东科技大学 Method and equipment for detecting obstacle in driving in foggy weather
CN115731401A (en) * 2022-11-29 2023-03-03 江西财经大学 Image smoke fine detection method based on texture perception




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant