CN114998770A - Highway identifier extraction method and system - Google Patents


Info

Publication number
CN114998770A
CN114998770A (application CN202210795758.3A; granted publication CN114998770B)
Authority
CN
China
Prior art keywords
image
unmanned aerial vehicle
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210795758.3A
Other languages
Chinese (zh)
Other versions
CN114998770B (en)
Inventor
Wang Yong (王勇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Geographic Sciences and Natural Resources of CAS
Original Assignee
Institute of Geographic Sciences and Natural Resources of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Geographic Sciences and Natural Resources of CAS filed Critical Institute of Geographic Sciences and Natural Resources of CAS
Priority to CN202210795758.3A priority Critical patent/CN114998770B/en
Publication of CN114998770A publication Critical patent/CN114998770A/en
Application granted granted Critical
Publication of CN114998770B publication Critical patent/CN114998770B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to a highway marking extraction method, belonging to the field of highway marking extraction, comprising the following steps: acquiring an original unmanned aerial vehicle remote sensing image; binarizing the original unmanned aerial vehicle remote sensing image to obtain a first image; applying erosion and closing operations to the first image and separating the adhered regions to obtain a second image; calculating the areas of the connected regions in the second image and extracting the largest-area part as the interference region; applying a dilation operation to the interference region to obtain a third image; subtracting the third image from the original unmanned aerial vehicle remote sensing image to eliminate the ground object interference and obtain a fourth image; constructing a vehicle detection model; inputting the fourth image into the vehicle detection model and identifying the vehicles in it; and removing the vehicles from the fourth image to obtain the road markings. The scheme of the invention can accurately identify road markings.

Description

Highway identification extraction method and system
Technical Field
The invention relates to the intersection of road engineering and artificial intelligence, and in particular to a highway marking extraction method and system.
Background
China's total road mileage has grown rapidly over the past decades, bringing with it increasingly complex road maintenance problems. Common road defects, such as blurred and broken road markings, appear on roads that have been in service for a long time. Highway markings play important roles in providing information, dividing lanes, and guiding vehicles, so solving or preventing these problems through road inspection is important for the future development of China's highway sector.
Road markings, among the most important parts of a road, provide drivers with a great deal of information, but long-term tire friction and natural effects (such as wind and sunlight) wear many of them down. Many road markings therefore need to be redrawn, and detecting them quickly and reliably, at a frequency high enough for routine road patrols, has become a critical requirement.
Traditional manual inspection or ground-penetrating radar consumes a great deal of labor to meet the required inspection frequency, while the data resolution and revisit period of aerial and satellite remote sensing can hardly meet the requirement. The rise of unmanned aerial vehicle (UAV) remote sensing, however, offers a new perspective and a new solution: thanks to its ultra-high resolution and very short revisit period, inspecting road problems with UAV remote sensing has become a promising new approach.
However, as an emerging technology, UAV remote sensing still lacks data sets: those for object detection and object tracking are relatively complete, but those for semantic segmentation are scarce, so rapid extraction of road markings by semantic segmentation is not yet feasible. Current mainstream approaches therefore combine traditional graphics or remote sensing methods, such as morphological operations, threshold segmentation, and object-based segmentation.
To extract road markings from UAV remote sensing images, two difficulties must be overcome: the interference of the complex ground objects in the UAV image, and the interference of the many vehicles on the road with further marking extraction.
To overcome these two difficulties, the invention provides a road marking extraction technique that applies morphological operations, threshold segmentation, and the deep learning model YOLOv5 to segment the road area in the image and extract the road vehicles.
Disclosure of Invention
The invention aims to provide a road marking extraction method and system that segment the road area in an image and extract the road vehicles.
In order to achieve the purpose, the invention provides the following scheme:
a highway identification extraction method comprises the following steps:
acquiring an original unmanned aerial vehicle remote sensing image;
carrying out binarization processing on the original unmanned aerial vehicle remote sensing image to obtain a first image;
carrying out corrosion and closing operation treatment on the first image, and separating an adhesion area to obtain a second image;
calculating the area of a connected region in the second image, and extracting a part with the largest area as an interference region;
performing expansion operation on the interference area to obtain a third image;
subtracting the third image from the original unmanned aerial vehicle remote sensing image, and eliminating ground object interference to obtain a fourth image;
constructing a vehicle detection model;
inputting the fourth image to the vehicle detection model, and identifying a vehicle in the fourth image;
and removing the vehicles in the fourth image to obtain the road identification.
Optionally, after acquiring the original unmanned aerial vehicle remote sensing image and before binarizing it to obtain the first image, the extraction method further includes: preprocessing the original unmanned aerial vehicle remote sensing image.
Optionally, preprocessing the original unmanned aerial vehicle remote sensing image specifically includes:
stretching the gray values of the original unmanned aerial vehicle remote sensing image to enhance the contrast;
denoising the image data;
and arranging the labels of the original unmanned aerial vehicle remote sensing image into the required format.
Optionally, constructing the vehicle detection model specifically includes:
extracting a training set from the preprocessed original unmanned aerial vehicle remote sensing images;
and training a YOLOv5m model with the training set to obtain the vehicle detection model.
Based on the above method, the invention further provides a road marking extraction system, comprising:
an original image acquisition module for acquiring an original unmanned aerial vehicle remote sensing image;
a binarization module for binarizing the original unmanned aerial vehicle remote sensing image to obtain a first image;
an erosion and closing module for applying erosion and closing operations to the first image and separating the adhered regions to obtain a second image;
a connected region area calculation module for calculating the areas of the connected regions in the second image and extracting the largest-area part as the interference region;
a dilation module for applying a dilation operation to the interference region to obtain a third image;
a differencing module for subtracting the third image from the original unmanned aerial vehicle remote sensing image, eliminating the ground object interference, and obtaining a fourth image;
a vehicle detection model construction module for constructing a vehicle detection model;
a vehicle identification module for inputting the fourth image into the vehicle detection model and identifying the vehicles in it;
and a road marking determination module for removing the vehicles from the fourth image to obtain the road markings.
Optionally, the extraction system further includes a preprocessing module between the original image acquisition module and the binarization module.
Optionally, the preprocessing module specifically includes:
a gray value stretching unit for stretching the gray values of the original unmanned aerial vehicle remote sensing image to enhance the contrast;
a denoising unit for denoising the image data of the original unmanned aerial vehicle remote sensing image;
and a label format arrangement unit for arranging the labels of the original unmanned aerial vehicle remote sensing image into the required format.
Optionally, the vehicle detection model construction module specifically includes:
a training set acquisition module for extracting a training set from the preprocessed original unmanned aerial vehicle remote sensing images;
and a training module for training a YOLOv5m model with the training set to obtain the vehicle detection model.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
The data are first preprocessed; morphological operations are then applied to the image to eliminate the interference around the road, yielding a road with little interference; the vehicles on the road are extracted with a trained YOLOv5 model and removed from the image; and the road markings are finally extracted from the image, greatly improving the recognition accuracy.
Drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a road marking extraction method according to an embodiment of the present invention;
FIG. 2 is an original image according to an embodiment of the present invention;
FIG. 3 is an image after binarization processing according to an embodiment of the invention;
FIG. 4 is an image after dilation of the non-marking region according to an embodiment of the present invention;
FIG. 5 is an image after differencing according to an embodiment of the invention;
FIG. 6 is an image before gray stretching according to an embodiment of the present invention;
FIG. 7 is an image after gray stretching according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a daytime image detection result according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating a night image detection result according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating vehicle identification results according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a road marking detection result according to an embodiment of the present invention;
FIG. 12 is a diagram of original remote sensing images and corresponding results at different positions of a road segment according to an embodiment of the present invention;
FIG. 13 is a diagram illustrating evaluation results and extraction results according to an embodiment of the present invention;
Fig. 14 is a schematic structural diagram of a road marking extraction system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a road sign extraction method and a road sign extraction system, which are used for realizing the segmentation of a road area on an image and the extraction of road vehicles.
In order to make the aforementioned objects, features and advantages of the present invention more comprehensible, the present invention is described in detail with reference to the accompanying drawings and the detailed description thereof.
Road markings are the white or yellow lane lines, guide lines, zebra crossings, and similar patterns painted on the road surface to delimit lanes, indicate directions, and provide information; identifying these road markings is therefore particularly important.
The invention provides a method for extracting road surface markings from unmanned aerial vehicle remote sensing images based on morphology, threshold segmentation, and deep learning object detection. The method extracts the highway markings in two main stages.
In the first stage, the original image is converted to a gray image, and the maximum between-class variance algorithm (the OTSU algorithm) is used to separate the road markings from the road: the markings are yellow or white while the road surface is gray-black, so their gray values differ greatly. In this step, however, the surrounding ground objects and the road vehicles are segmented at the same time, and both interfere with the subsequent extraction.
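The between-class-variance criterion of the OTSU algorithm can be sketched in pure NumPy as follows. This is a minimal illustration only; the toy image and all numeric values are assumptions for demonstration, not data from the patent.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold that maximizes the between-class variance (OTSU)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum_prob = np.cumsum(prob)                      # class-0 probability per threshold
    cum_mean = np.cumsum(prob * np.arange(256))
    global_mean = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_prob[t], 1.0 - cum_prob[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (global_mean - cum_mean[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# toy gray image: dark pavement (~30) with a bright "lane line" (~220)
img = np.full((8, 8), 30, dtype=np.uint8)
img[3:5, :] = 220
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)                 # markings -> 1, pavement -> 0
```

In an OpenCV pipeline one would typically call `cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)` instead; the explicit loop above only makes the variance criterion visible.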
The subsequent processing therefore requires two important steps: 1) eliminating the interference of the surrounding ground objects, and 2) eliminating the interference of the vehicles.
Because the gray values of the surrounding ground objects vary widely, they strongly affect the binarization step. The method adopted by the invention therefore combines threshold segmentation with morphological operations to separate the road from the surrounding objects.
Then, since the road surface is often occluded by vehicles, the vehicles must be removed from the image at this step. Because the color and texture features of vehicles vary and they mutually occlude the road markings, the threshold segmentation, morphological, and object-based segmentation methods of traditional image processing perform poorly at this stage. The invention therefore adopts a machine learning method: a YOLOv5m network is trained to extract the vehicles from the images, and the vehicles are then removed, eliminating their influence on the marking extraction process.
After these two steps of road extraction and vehicle removal, only the road markings remain on the road surface, and the marking extraction is complete. The scheme is described in detail below.
Fig. 1 is a flowchart of a road marking extraction method provided by an embodiment of the present invention; as shown in Fig. 1, the method includes:
step 101: and acquiring an original unmanned aerial vehicle remote sensing image.
The invention uses the UAVDT data set, which contains 40,000 annotated images in which three kinds of road vehicles are labeled: Car, Truck, and Bus. Each picture is 1024 px × 540 px with three RGB channels. The data set covers different times (morning, daytime, night), different weather (rain, fog), and different ground scenes.
In the technical scheme of the invention, the UAVDT data set plays two roles: first, as the training data set for the vehicle detection model, which greatly assists the road marking extraction process; second, as the image data from which the invention extracts the road markings.
Step 102: and carrying out binarization processing on the original unmanned aerial vehicle remote sensing image to obtain a first image.
The original image is shown in fig. 2, and the binarized image is shown in fig. 3.
Step 103: and carrying out corrosion and closing operation treatment on the first image, and separating an adhesion area to obtain a second image.
To better extract the road marking area, the other areas in the image must be removed. The first image is first eroded and then closed, which makes the partitions in the image more distinct and prevents several nearby small regions from being wrongly merged into one large region and removed together.
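The erode-then-close step can be illustrated with hand-rolled 3×3 binary operators. This is a sketch under assumptions (kernel size and the toy blobs are invented for demonstration); a real pipeline would use `cv2.erode` and `cv2.morphologyEx(..., cv2.MORPH_CLOSE, kernel)` with tuned kernels.

```python
import numpy as np

def erode(b):
    """3x3 binary erosion: a pixel stays 1 only if its whole neighborhood is 1."""
    p = np.pad(b, 1, constant_values=0)
    out = np.ones_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + b.shape[0], 1 + dx : 1 + dx + b.shape[1]]
    return out

def dilate(b):
    """3x3 binary dilation: a pixel becomes 1 if any neighbor is 1."""
    p = np.pad(b, 1, constant_values=0)
    out = np.zeros_like(b)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + b.shape[0], 1 + dx : 1 + dx + b.shape[1]]
    return out

def close_(b):
    """Closing = dilation followed by erosion; fills small gaps."""
    return erode(dilate(b))

# two blobs joined by a 1-pixel bridge: erosion breaks the adhesion,
# closing then tidies each remaining blob without re-joining them
b = np.zeros((7, 11), dtype=np.uint8)
b[1:6, 1:4] = 1          # blob A
b[1:6, 7:10] = 1         # blob B
b[3, 4:7] = 1            # thin bridge (the "adhesion")
separated = close_(erode(b))
```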
Step 104: and calculating the area of the connected region in the second image, and extracting the part with the largest area as an interference region.
The method used in the invention makes a statistical calculation of the area of each connected region in the image, averages those areas, and selects the parts of the image larger than the average area as the non-marking region, i.e., the interference region.
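The area statistics over connected regions can be sketched with a simple 4-connected labelling pass. This is illustrative only (the toy blobs are assumptions); in practice `cv2.connectedComponentsWithStats` would supply the labels and areas directly.

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labelling; returns the label map and area per label."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    areas = {}
    next_label = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                next_label += 1
                labels[i, j] = next_label
                q, area = deque([(i, j)]), 0
                while q:                       # flood fill one component
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                areas[next_label] = area
    return labels, areas

# small blob (a marking) vs. large blob (an interfering ground object)
b = np.zeros((10, 10), dtype=np.uint8)
b[1:3, 1:3] = 1                        # area 4
b[4:10, 4:10] = 1                      # area 36
labels, areas = label_components(b)
mean_area = sum(areas.values()) / len(areas)
interference = {k for k, a in areas.items() if a > mean_area}   # above-average areas
```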
Step 105: and performing expansion operation on the interference area to obtain a third image.
The non-marking regions overlap little with the target region, so a larger kernel can be used to dilate them and enlarge their edges; this removes the non-marking parts more thoroughly when the original image is poor. The result after the dilation operation is shown in Fig. 4. Comparing the difference taken without and with the dilation operation, the former retains extra edge lines that hinder the subsequent steps, while the latter cleanly removes the non-marking parts.
Step 106: and subtracting the third image from the original unmanned aerial vehicle remote sensing image, and eliminating the ground object interference to obtain a fourth image, as shown in fig. 5.
Steps 101 to 106 mainly remove the interfering clutter.
At this point, the remaining part of the image is mainly the markings, but some non-marking targets remain. Applying further morphological operations could damage the remaining marking region, so threshold segmentation of the RGB channels of the remaining region is a feasible way to cull them further. For the images tested by the invention, no additional gray-threshold segmentation of the original image was needed.
After the above steps, the markings on the road are clearly visible, but vehicles are still mixed into the marking area: their gray value characteristics, and for some vehicles their total pixel sizes, are too close to those of the road markings, so the traditional methods of morphology and threshold segmentation perform poorly. The invention therefore adopts a deep learning method to detect the vehicle positions in the RGB image and then remove them.
This stage mainly uses morphological operations to clear the interference of the surrounding ground objects; the extraction of the vehicle interference on the road surface follows.
Step 107: and constructing a vehicle detection model.
This specifically includes:
extracting a training set from the preprocessed original unmanned aerial vehicle remote sensing images;
and training a YOLOv5m model with the training set to obtain the vehicle detection model.
In addition, to further improve the accuracy of the model, the data are preprocessed before training the YOLOv5m model, mainly by:
data denoising, contrast enhancement (gray stretching), and label formatting.
Common filters for data denoising include the median filter, the mean filter, and the Gaussian filter. The median filter replaces the target pixel with the median of the gray values of the pixels in its neighborhood; the mean filter replaces it with their average; and the Gaussian filter weights the neighborhood with a two-dimensional Gaussian function whose horizontal and vertical standard deviations are set for the convolution kernel. Of the three, the median filter consumes the fewest computational resources, so it is chosen for denoising.
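The median filter described above can be sketched without OpenCV (illustrative only; a real pipeline would call `cv2.medianBlur(img, 3)`; the toy image with a single salt-noise pixel is an assumption):

```python
import numpy as np

def median3(img):
    """3x3 median filter (edge pixels padded by edge replication)."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[1 + dy : 1 + dy + img.shape[0], 1 + dx : 1 + dx + img.shape[1]]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 255                      # salt noise: a single bright outlier
out = median3(img)                   # the outlier is replaced by the neighborhood median
```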
Gray-value stretching is an important means of image enhancement that can effectively improve the clarity of low-contrast images. The gray distribution of a low-contrast image is severely unbalanced and often confined to a narrow range of gray values. Stretching spreads the gray values as uniformly as possible, improving the quality of the image.
As described above, because the UAVDT data set includes complex environments such as night, rain, and fog, the overall gray values of the images are low. To enhance the contrast, gray-value stretching maps the maximum gray value in the image to 255 and the minimum to 0, increasing the overall contrast. Figs. 6 and 7 show an image before and after stretching, respectively.
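The stretch that maps the minimum gray value to 0 and the maximum to 255 is a short NumPy function (a sketch; the 2×2 sample array is an illustrative assumption):

```python
import numpy as np

def stretch(img):
    """Linear gray-value stretch: min -> 0, max -> 255."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:                     # flat image: nothing to stretch
        return np.zeros_like(img)
    return ((img.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

dim = np.array([[60, 80], [100, 120]], dtype=np.uint8)   # narrow gray range
bright = stretch(dim)                                     # full [0, 255] range
```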
Before deep learning training, the label format must be modified, because the format given in the original data does not conform to the YOLO data format, as shown below.
TABLE 1 UAVDT tag Format
frame_index, target_id, bbox_left, bbox_top, bbox_width, bbox_height, out-of-view, occlusion, object_category (pixel coordinates of the top-left corner and the box size)
TABLE 2 Yolo Standard data Format
class, x_center, y_center, width, height (box center and size, normalized to [0, 1] by the image width and height)
From this, the conversion formulas can be derived as follows (image width W = 1024, height H = 540):
x_center = (bbox_left + bbox_width / 2) / W
y_center = (bbox_top + bbox_height / 2) / H
width = bbox_width / W
height = bbox_height / H
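The per-box conversion from a corner-based pixel box to YOLO's normalized center format can be written as follows. The function name and the sample box are assumptions; the 1024 × 540 image size is taken from the data set description above.

```python
def uavdt_to_yolo(left, top, w, h, cls, img_w=1024, img_h=540):
    """Convert a corner-based pixel box to YOLO's normalized center format."""
    xc = (left + w / 2) / img_w      # normalized box-center x
    yc = (top + h / 2) / img_h       # normalized box-center y
    return (cls, round(xc, 6), round(yc, 6),
            round(w / img_w, 6), round(h / img_h, 6))

# hypothetical box: a 64 x 27 px vehicle at pixel (100, 200), class 0 (Car)
label = uavdt_to_yolo(left=100, top=200, w=64, h=27, cls=0)
```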
after pretreatment, the Yolo5m model was trained. After the model is trained, the efficiency of the model is tested, the model is used for marking the pictures in the test set, the average time of each picture is about 25 milliseconds, the original image and the target detection result are shown in fig. 8 and 9, fig. 8 is a schematic diagram of the daytime detection result, and fig. 9 is a schematic diagram of the nighttime detection result.
It can be seen that the trained model results have a high detection accuracy for the vehicle owner on the road. In the subsequent process of road identification, the vehicles can be screened out through the position of the positioning frame provided by the detection result.
Step 108: inputting the fourth image to the vehicle detection model, identifying a vehicle in the fourth image.
Step 109: and removing the vehicles in the fourth image to obtain the road identification.
The above operations extract the marking region. After the training of the YOLOv5m model is completed, the original image is input into it for detection, the positions of the vehicles in the image are obtained, the corresponding regions are erased using the localization results, and the vehicles are thereby eliminated from the road surface, as shown on the right of Fig. 10 (see the boxed parts).
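Erasing detected vehicles given YOLO-style pixel boxes can be sketched as follows. The box coordinates here are hypothetical; a real pipeline would take them from the model's detections.

```python
import numpy as np

def erase_boxes(img, boxes):
    """Zero out each detected vehicle box (x1, y1, x2, y2) in the image."""
    out = img.copy()                 # leave the input image untouched
    for x1, y1, x2, y2 in boxes:
        out[y1:y2, x1:x2] = 0        # blank the vehicle region
    return out

img = np.full((10, 10), 200, dtype=np.uint8)   # toy marking image
detections = [(2, 3, 6, 7)]                    # hypothetical detection box, pixel coords
clean = erase_boxes(img, detections)
```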
After the vehicle regions are removed, the remaining targets in the image can be regarded as road marking regions, and the extraction of the road marking area is complete. A schematic diagram of the extraction results is shown in Fig. 11.
In order to evaluate the stability of the method, several images at different positions are captured on the same road and evaluated in an attempt, and fig. 11 is an original remote sensing image and a corresponding result graph at different positions of the road section: it can be seen that the algorithm has a better road sign extraction effect on the road section, especially when there is no fog shielding, the extraction effect is almost the same as the sign area of the image, and the extraction of the road sign is very smooth. However, the image with the interference of fog has a poor extraction effect, and particularly, in the area covered by fog, the road sign can hardly be extracted. Meanwhile, even if the road sign extraction device has low contrast and definition, the extraction effect of the road sign is still very good as long as the region is not completely covered by fog. The test result of the highway section illustrates the feasibility of the method, and simultaneously, the stability of the method is proved to a certain extent by certain anti-interference capability of a person with mild severe weather conditions
As described above, the method is stable when fog is light; for an accurate evaluation, regions with good stability (thinner fog) were selected for assessment.
The evaluation and extraction results are shown in fig. 13. Accuracy evaluation of the above results gives an average Precision of 92.23% and an average Recall of 91.72%.
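For clarity, the Precision and Recall figures above follow the standard definitions. A minimal sketch (with toy data, not the patent's test set) computes them from a predicted mask and a ground-truth mask, both flattened to pixel lists:

```python
# Toy illustration of pixel-level Precision/Recall evaluation.
# `pred` and `truth` here are invented example data.

def precision_recall(pred, truth):
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))  # true positives
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))  # false positives
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pred  = [1, 1, 0, 1, 0, 1, 1, 0]
truth = [1, 1, 0, 0, 1, 1, 1, 0]
p, r = precision_recall(pred, truth)
print(round(p, 2), round(r, 2))  # 0.8 0.8
```

Averaging these two rates over all evaluated images yields the reported average Precision and Recall.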
In summary, even in the presence of heavy fog, the method still achieves stable road marking extraction, demonstrating its stability and accuracy.
Based on the above method, the present invention further provides a road marking extraction system. As shown in fig. 14, the system includes:
an original image acquisition module 201, configured to acquire an original unmanned aerial vehicle remote sensing image;
a binarization processing module 202, configured to perform binarization processing on the original unmanned aerial vehicle remote sensing image to obtain a first image;
an erosion and closing operation processing module 203, configured to perform erosion and closing operations on the first image and separate adhered regions to obtain a second image;
a connected-region area calculation module 204, configured to calculate the areas of the connected regions in the second image and extract the largest-area portion as the interference region;
a dilation operation module 205, configured to perform a dilation operation on the interference region to obtain a third image;
a differencing module 206, configured to subtract the third image from the original unmanned aerial vehicle remote sensing image and remove ground-object interference to obtain a fourth image;
a vehicle detection model construction module 207, configured to construct a vehicle detection model;
a vehicle identification module 208, configured to input the fourth image into the vehicle detection model and identify the vehicles in the fourth image;
and a road marking determination module 209, configured to remove the vehicles from the fourth image to obtain the road markings.
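The modules above mirror the method steps. Purely as an illustration, the morphological core (binarize, keep the largest connected region as the interference, dilate it, subtract) can be sketched on a toy grid in pure Python. This is a simplified sketch under assumed thresholds and 4-connectivity, not the patent's implementation; in particular, the patent differences the dilated region with the original image, and applies erosion and closing before labeling, both of which are omitted here.

```python
# Simplified, assumption-laden sketch of the pipeline on a toy grid.
from collections import deque

def binarize(img, thr):
    return [[1 if v > thr else 0 for v in row] for row in img]

def largest_component(mask):
    """Keep only the 4-connected foreground component with the largest area."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while q:                       # breadth-first flood fill
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out

def dilate(mask):
    """One dilation pass with a 3x3 structuring element."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[ny][nx]
                      for ny in range(max(0, y - 1), min(h, y + 2))
                      for nx in range(max(0, x - 1), min(w, x + 2)))
             else 0 for x in range(w)] for y in range(h)]

def subtract(a, b):
    """Pixels of `a` not covered by `b`."""
    return [[1 if av and not bv else 0 for av, bv in zip(ar, br)]
            for ar, br in zip(a, b)]

img = [[200, 200, 200, 0, 0, 0],     # large bright blob = "interference"
       [200, 200, 200, 0, 0, 0],
       [200, 200, 200, 0, 0, 250],   # one isolated bright pixel = "target"
       [200, 200, 200, 0, 0, 0]]
b = binarize(img, 100)
road = dilate(largest_component(b))  # interference region, grown by dilation
targets = subtract(b, road)          # what survives the differencing step
print(sum(map(sum, targets)))        # only the lone bright pixel remains: 1
```

The design intent matches the module chain: the largest connected region absorbs the dominant ground object, dilation adds a safety margin around it, and subtraction leaves only small bright targets for the vehicle detector to filter.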
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the others, and for parts that are the same or similar, the embodiments may be referred to one another. Since the system disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively brief; for relevant details, reference may be made to the description of the method.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and its core concept. Meanwhile, a person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In view of the above, the content of this specification should not be construed as limiting the invention.

Claims (8)

1. A road sign extraction method, characterized by comprising:
acquiring an original unmanned aerial vehicle remote sensing image;
performing binarization processing on the original unmanned aerial vehicle remote sensing image to obtain a first image;
performing erosion and closing operations on the first image, and separating adhered regions to obtain a second image;
calculating the areas of the connected regions in the second image, and extracting the largest-area portion as an interference region;
performing a dilation operation on the interference region to obtain a third image;
subtracting the third image from the original unmanned aerial vehicle remote sensing image, and removing ground-object interference to obtain a fourth image;
constructing a vehicle detection model;
inputting the fourth image into the vehicle detection model, and identifying vehicles in the fourth image;
and removing the vehicles from the fourth image to obtain the road signs.
2. The road sign extraction method according to claim 1, wherein, between the step of "acquiring an original unmanned aerial vehicle remote sensing image" and the step of "performing binarization processing on the original unmanned aerial vehicle remote sensing image to obtain a first image", the method further comprises: preprocessing the original unmanned aerial vehicle remote sensing image.
3. The road sign extraction method according to claim 2, wherein the preprocessing of the original unmanned aerial vehicle remote sensing image specifically comprises:
stretching the gray values of the original unmanned aerial vehicle remote sensing image to enhance contrast;
denoising the original unmanned aerial vehicle remote sensing image;
and organizing the label format of the original unmanned aerial vehicle remote sensing image.
4. The road sign extraction method according to claim 2, wherein the constructing of the vehicle detection model specifically comprises:
extracting a training set from the preprocessed original unmanned aerial vehicle remote sensing image;
and training a Yolo5m model with the training set to obtain the vehicle detection model.
5. A road sign extraction system, comprising:
an original image acquisition module, configured to acquire an original unmanned aerial vehicle remote sensing image;
a binarization processing module, configured to perform binarization processing on the original unmanned aerial vehicle remote sensing image to obtain a first image;
an erosion and closing operation processing module, configured to perform erosion and closing operations on the first image and separate adhered regions to obtain a second image;
a connected-region area calculation module, configured to calculate the areas of the connected regions in the second image and extract the largest-area portion as an interference region;
a dilation operation module, configured to perform a dilation operation on the interference region to obtain a third image;
a differencing module, configured to subtract the third image from the original unmanned aerial vehicle remote sensing image and remove ground-object interference to obtain a fourth image;
a vehicle detection model construction module, configured to construct a vehicle detection model;
a vehicle identification module, configured to input the fourth image into the vehicle detection model and identify vehicles in the fourth image;
and a road sign determination module, configured to remove the vehicles from the fourth image to obtain the road signs.
6. The road sign extraction system according to claim 5, wherein the extraction system further comprises a preprocessing module between the original image acquisition module and the binarization processing module.
7. The road sign extraction system according to claim 6, wherein the preprocessing module specifically comprises:
a gray value stretching unit, configured to stretch the gray values of the original unmanned aerial vehicle remote sensing image to enhance contrast;
a denoising unit, configured to denoise the original unmanned aerial vehicle remote sensing image;
and a label format arrangement unit, configured to organize the label format of the original unmanned aerial vehicle remote sensing image.
8. The road sign extraction system according to claim 6, wherein the vehicle detection model construction module specifically comprises:
a training set acquisition module, configured to extract a training set from the preprocessed original unmanned aerial vehicle remote sensing image;
and a training module, configured to train a Yolo5m model with the training set to obtain the vehicle detection model.
CN202210795758.3A 2022-07-06 2022-07-06 Highway identifier extraction method and system Active CN114998770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210795758.3A CN114998770B (en) 2022-07-06 2022-07-06 Highway identifier extraction method and system

Publications (2)

Publication Number Publication Date
CN114998770A true CN114998770A (en) 2022-09-02
CN114998770B CN114998770B (en) 2023-04-07

Family

ID=83018940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210795758.3A Active CN114998770B (en) 2022-07-06 2022-07-06 Highway identifier extraction method and system

Country Status (1)

Country Link
CN (1) CN114998770B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789080A (en) * 2010-01-21 2010-07-28 上海交通大学 Detection method for vehicle license plate real-time positioning character segmentation
CN104197897A (en) * 2014-04-25 2014-12-10 厦门大学 Urban road marker automatic sorting method based on vehicle-mounted laser scanning point cloud
CN105389556A (en) * 2015-11-10 2016-03-09 中南大学 High-resolution-remote-sensing-image vehicle detection method considering shadow region
CN105718870A (en) * 2016-01-15 2016-06-29 武汉光庭科技有限公司 Road marking line extracting method based on forward camera head in automatic driving
CN106652551A (en) * 2016-12-16 2017-05-10 浙江宇视科技有限公司 Parking stall detection method and device
CN107330380A (en) * 2017-06-14 2017-11-07 千寻位置网络有限公司 Lane line based on unmanned plane image is automatically extracted and recognition methods
CN107944407A (en) * 2017-11-30 2018-04-20 中山大学 A kind of crossing zebra stripes recognition methods based on unmanned plane
CN108734105A (en) * 2018-04-20 2018-11-02 东软集团股份有限公司 Method for detecting lane lines, device, storage medium and electronic equipment
CN111488808A (en) * 2020-03-31 2020-08-04 杭州诚道科技股份有限公司 Lane line detection method based on traffic violation image data
CN112071076A (en) * 2020-08-25 2020-12-11 浙江省机电设计研究院有限公司 Method and system for extracting unique identification features of vehicles on highway
CN112149595A (en) * 2020-09-29 2020-12-29 爱动超越人工智能科技(北京)有限责任公司 Method for detecting lane line and vehicle violation by using unmanned aerial vehicle
US20210019547A1 (en) * 2019-07-17 2021-01-21 Cognizant Technology Solutions India Pvt. Ltd. System and a method for efficient image recognition
CN112488046A (en) * 2020-12-15 2021-03-12 中国科学院地理科学与资源研究所 Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN112733800A (en) * 2021-01-22 2021-04-30 中国科学院地理科学与资源研究所 Remote sensing image road information extraction method and device based on convolutional neural network
CN113888397A (en) * 2021-10-08 2022-01-04 云南省烟草公司昆明市公司 Tobacco pond cleaning and plant counting method based on unmanned aerial vehicle remote sensing and image processing technology
CN114267023A (en) * 2020-09-16 2022-04-01 奥迪股份公司 Road sign recognition and processing method, system, computer device and storage medium
CN114463684A (en) * 2022-02-14 2022-05-10 内蒙古工业大学 Urban highway network-oriented blockage detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Baoliang et al., "Recognition and Extraction of Highway Traffic Marking Lines Guiding Intelligent Vehicle Driving", Jiangsu Agricultural Mechanization (《江苏农机化》) *

Also Published As

Publication number Publication date
CN114998770B (en) 2023-04-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant