CN110427902B - Method and system for extracting traffic signs on aviation image road surface - Google Patents

Method and system for extracting traffic signs on aviation image road surface

Info

Publication number
CN110427902B
CN110427902B (application CN201910728779.1A)
Authority
CN
China
Prior art keywords
image
gabor
gray
component
edge
Prior art date
Legal status
Active
Application number
CN201910728779.1A
Other languages
Chinese (zh)
Other versions
CN110427902A (en
Inventor
黄亮
陈朋弟
姚丙秀
王枭轩
Current Assignee
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN201910728779.1A priority Critical patent/CN110427902B/en
Publication of CN110427902A publication Critical patent/CN110427902A/en
Application granted granted Critical
Publication of CN110427902B publication Critical patent/CN110427902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and system for extracting road surface traffic signs from aerial images. First, the multispectral aerial image is converted to a grayscale image; then texture feature extraction and image enhancement are performed on the grayscale image with Gabor filtering and a two-dimensional convolution operation; next, a Sobel operator performs edge detection on the enhanced image and binary segmentation yields a binary edge image; finally, the road traffic signs are extracted from the binary edge image and filled using a minimum-and-maximum object deletion method. The method effectively extracts road traffic signs under different lighting conditions, improves the extraction accuracy of road traffic signs, and has strong robustness and application value.

Description

Method and system for extracting traffic signs on aviation image road surface
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and system for extracting road surface traffic signs from aerial images.
Background
With the development of remote sensing technology, remote sensing image data has found unprecedented application, including in road traffic. However, because of limited resolution, current remote sensing imagery cannot extract fine detail inside the road, such as the extraction and positioning of road surface traffic signs. Aerial images, being convenient to capture, fast to acquire, high in resolution and operationally flexible, are widely used in agriculture, transportation and many other fields, and their availability makes large-area traffic sign extraction feasible. Effective extraction and positioning of road traffic signs can serve vehicle systems and provide an inspection tool for road maintenance staff: for example, road managers can assess damage to road traffic signs from aerial images of two time phases and quickly locate and repair the relevant areas. Compared with traditional field surveys, this markedly improves working efficiency and greatly reduces the consumption of manpower, material and financial resources.
A great deal of research on detecting and identifying road traffic signs and roadside traffic sign board information has been carried out at home and abroad, and many methods have been proposed. Existing road traffic sign extraction methods mainly use a vehicle-mounted camera to acquire the road information in front of the vehicle and then detect the signs with different techniques. For example, Li Qiang, in "Road traffic marking detection and recognition based on multi-level fusion and convolutional neural networks [D]. Xi'an: Chang'an University, 2018", corrects texture information with a Gabor transform, determines the optimal vanishing point of the road with a local soft-voting method, segments the road area using a linear programming principle, and finally detects and identifies traffic signs with an improved, saliency-fused LeNet-5 neural network. The article "Wu T, Ranganathan A. A practical system for road marking detection and recognition [C]. Intelligent Vehicles Symposium, IEEE, 2012: 25-30" learns road marking templates from training data and then performs template matching with MSER (maximally stable extremal region) features to detect multiple road markings. Radu Danescu et al., in "Danescu R, Nedevschi S. Detection and classification of painted road objects for intersection analysis applications [C]. International IEEE Conference on Intelligent Transportation Systems, IEEE, 2010: 433-", address the detection and classification of painted road objects for intersection analysis applications. Georg Maier et al., in "Maier G, Pangerl S, Schindler A. Real-time detection and classification of arrow markings using curve-based prototype fitting [C]. Intelligent Vehicles Symposium, IEEE, 2011: 442-447", propose detecting and classifying arrow markings by curve model fitting: the region of interest, i.e. the area with road signs in front of the vehicle, is first extracted, then encoded as circular arc splines and compared with extracted target candidate contours to detect and classify the arrow markings. In related research, many methods first transform the acquired image by IPM (inverse perspective mapping) and then perform detection. For example, Markus Schreiber, in "Schreiber M, Poggenhans F, Stiller C. Detecting symbols on road surface for mapping and localization using OCR [C]. IEEE International Conference on Intelligent Transportation Systems, IEEE, 2014: 597-602", trains OCR (optical character recognition) with vector graphics, obtains a top view from vanishing points via the IPM transform, and finally classifies symbols with the Tesseract engine. Nan Wang et al., in "Wang N, et al. The detection and recognition of arrow markings based on monocular vision [C]. IEEE International Symposium on Industrial Electronics, IEEE, 2009: 4380-4386", obtain a top view via the IPM transform, extract arrow marking features with an improved Haar wavelet, and finally recognize the arrow with a support vector machine; their results show the method is fairly robust.
In addition, much research has applied machine learning and deep learning to road traffic sign detection and recognition. For example, Wei Liu et al., in "Liu W, Lv J, Yu B, et al. Multi-type road marking recognition using AdaBoost detection and extreme learning machine classification [C]. Intelligent Vehicles Symposium (IV), IEEE, 2015: 41-46", first suppress the perspective effect with an IPM transformation and obtain image patches containing road markings by filtering, then detect candidates with a Haar-feature AdaBoost classifier and recognize the road marking type with an extreme learning machine classifier using BW-HOG features (a histogram-of-oriented-gradients variant).
In summary, existing traffic sign detection and identification mainly acquires the road information in front of the vehicle with a vehicle-mounted camera and then detects and identifies the signs with various methods. However, aerial images and vehicle-mounted cameras differ in their imaging and data acquisition modes, so these methods are difficult to apply directly to extracting road signs from aerial images and suffer from low extraction accuracy for road surface signs.
Disclosure of Invention
The invention aims to provide a method and system for extracting road surface traffic signs from aerial images, in order to solve the problems that existing traffic sign detection and identification methods are difficult to apply directly to aerial image road sign extraction and that the extraction accuracy of road surface traffic signs is low.
In order to achieve the purpose, the invention provides the following scheme:
an aerial image road traffic sign extraction method, comprising:
acquiring a multispectral aerial image containing a road traffic sign; the multispectral aerial image is an RGB image;
converting the multispectral aerial image into a gray level image;
adopting Gabor filtering and two-dimensional convolution operation to enhance the gray level image to generate an enhanced image;
adopting a Sobel edge detection operator to carry out edge detection on the enhanced image to obtain a gray edge image;
segmenting the gray scale edge image by using a threshold segmentation method to obtain a binary edge image;
and deleting the minimum object and the maximum object of the binary edge image by adopting a minimum and maximum object deletion method, and extracting the road traffic sign in the multispectral aerial image.
Optionally, the converting the multispectral aerial image into a grayscale image specifically includes:
converting the multispectral aerial image into a grayscale image by using the formula GI(x, y) = 0.299×R + 0.587×G + 0.114×B; wherein R, G and B are the color values of the R, G and B channels of the multispectral aerial image; GI(x, y) is the pixel brightness value at (x, y), 0 ≤ GI(x, y) ≤ 255, and x and y are the horizontal and vertical coordinates of the pixel point.
Optionally, the enhancing processing is performed on the grayscale image by using Gabor filtering and two-dimensional convolution operation to generate an enhanced image, which specifically includes:
using the two-dimensional Gabor filter
Gabor(x, y; λ, θ, ψ, σ, γ) = exp(−(x'² + γ²y'²)/(2σ²)) × exp(i(2πx'/λ + ψ))
to carry out Gabor filtering on the grayscale image and generate the Gabor filter feature map Gabor(x, y; λ, θ, ψ, σ, γ); wherein x and y are the horizontal and vertical coordinates of the pixel point (x, y); λ is the wavelength of the Gabor filter; θ is the orientation angle of the Gabor filter; ψ is the phase offset of the Gabor filter; σ is the standard deviation of the Gaussian function; γ is the width ratio; x' = x cosθ + y sinθ; y' = −x sinθ + y cosθ; i is the imaginary unit;
using the formula
EI(x, y) = Σ(s=1..m) Σ(t=1..n) GI(x − s + 1, y − t + 1) × Gabor(s, t; λ, θ, ψ, σ, γ)
to perform a two-dimensional convolution operation on the grayscale image GIxy = {GI(x, y), (1 ≤ x ≤ M, 1 ≤ y ≤ N)} and the Gabor filter feature map Gaborxy = {Gabor(x, y; λ, θ, ψ, σ, γ), (1 ≤ x ≤ m, 1 ≤ y ≤ n)}, generating the enhanced image EI(x, y); wherein M and N are the total numbers of rows and columns of the grayscale image, and m and n are the total numbers of rows and columns of the Gabor filter.
Optionally, the enhanced image is subjected to edge detection by using a Sobel edge detection operator to obtain a gray edge image, which specifically includes:
using the formula Gx = EIx(x, y) = EI(x−1, y+1) + 2EI(x, y+1) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x, y−1) + EI(x+1, y−1)) to perform the x-component calculation on the enhanced image EI(x, y) and generate the composite gradient x-component Gx; wherein EIx(x, y) is the x-component of the enhanced image EI(x, y);
using the formula Gy = EIy(x, y) = EI(x+1, y−1) + 2EI(x+1, y) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x−1, y) + EI(x−1, y+1)) to perform the y-component calculation on the enhanced image EI(x, y) and generate the composite gradient y-component Gy; wherein EIy(x, y) is the y-component of the enhanced image EI(x, y);
using the formula
G = √(Gx² + Gy²)
to perform the composite gradient calculation on the composite gradient x-component Gx and the composite gradient y-component Gy, generating the gray edge image GEI(x, y); wherein G is the composite gradient.
Optionally, the segmenting the gray edge image by using a threshold segmentation method to obtain a binary edge image specifically includes:
using the formula
BEI(x, y) = 1 if GEI(x, y) ≥ T, otherwise BEI(x, y) = 0
to segment the gray edge image and convert the gray edge image GEI(x, y) into the binary edge image BEI(x, y); where T is the image segmentation threshold.
Optionally, the minimum and maximum object deletion is performed on the binary edge image by using a minimum and maximum object deletion method to extract the road traffic sign in the multispectral aerial image, and the method specifically includes:
deleting the object with the area smaller than the minimum object deletion threshold value in the binary edge image, and generating a minimum object deleted image;
determining the sibling objects of the binary edge image through 8-connectivity;
and deleting the objects with the area larger than the maximum object deletion threshold value in the sibling objects, and extracting the road traffic signs in the multispectral aerial image.
An aerial image pavement traffic sign extraction system, the system comprising:
the original image acquisition module is used for acquiring a multispectral aerial image containing a road traffic sign; the multispectral aerial image is an RGB image;
the image conversion module is used for converting the multispectral aerial image into a gray level image;
the image enhancement module is used for enhancing the gray level image by adopting Gabor filtering and two-dimensional convolution operation to generate an enhanced image;
the edge detection module is used for carrying out edge detection on the enhanced image by adopting a Sobel edge detection operator to obtain a gray edge image;
the image segmentation module is used for segmenting the gray edge image by utilizing a threshold segmentation method to obtain a binary edge image;
and the minimum and maximum object deleting module is used for deleting the minimum object and the maximum object of the binary edge image by adopting a minimum and maximum object deleting method and extracting the road traffic sign in the multispectral aerial image.
Optionally, the image conversion module specifically includes:
the image conversion unit is used for converting the multispectral aerial image into a grayscale image by using the formula GI(x, y) = 0.299×R + 0.587×G + 0.114×B; wherein R, G and B are the color values of the R, G and B channels of the multispectral aerial image; GI(x, y) is the pixel brightness value at (x, y), 0 ≤ GI(x, y) ≤ 255, and x and y are the horizontal and vertical coordinates of the pixel point.
Optionally, the image enhancement module specifically includes:
a Gabor filtering unit, configured to use the two-dimensional Gabor filter
Gabor(x, y; λ, θ, ψ, σ, γ) = exp(−(x'² + γ²y'²)/(2σ²)) × exp(i(2πx'/λ + ψ))
to carry out Gabor filtering on the grayscale image and generate the Gabor filter feature map Gabor(x, y; λ, θ, ψ, σ, γ); wherein x and y are the horizontal and vertical coordinates of the pixel point (x, y); λ is the wavelength of the Gabor filter; θ is the orientation angle of the Gabor filter; ψ is the phase offset of the Gabor filter; σ is the standard deviation of the Gaussian function; γ is the width ratio; x' = x cosθ + y sinθ; y' = −x sinθ + y cosθ; i is the imaginary unit;
a two-dimensional convolution operation unit, configured to use the formula
EI(x, y) = Σ(s=1..m) Σ(t=1..n) GI(x − s + 1, y − t + 1) × Gabor(s, t; λ, θ, ψ, σ, γ)
to perform a two-dimensional convolution operation on the grayscale image GIxy = {GI(x, y), (1 ≤ x ≤ M, 1 ≤ y ≤ N)} and the Gabor filter feature map Gaborxy = {Gabor(x, y; λ, θ, ψ, σ, γ), (1 ≤ x ≤ m, 1 ≤ y ≤ n)}, generating the enhanced image EI(x, y); wherein M and N are the total numbers of rows and columns of the grayscale image, and m and n are the total numbers of rows and columns of the Gabor filter.
Optionally, the edge detection module specifically includes:
an x-component calculation unit, configured to use the formula Gx = EIx(x, y) = EI(x−1, y+1) + 2EI(x, y+1) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x, y−1) + EI(x+1, y−1)) to perform the x-component calculation on the enhanced image EI(x, y) and generate the composite gradient x-component Gx; wherein EIx(x, y) is the x-component of the enhanced image EI(x, y);
a y-component calculation unit, configured to use the formula Gy = EIy(x, y) = EI(x+1, y−1) + 2EI(x+1, y) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x−1, y) + EI(x−1, y+1)) to perform the y-component calculation on the enhanced image EI(x, y) and generate the composite gradient y-component Gy; wherein EIy(x, y) is the y-component of the enhanced image EI(x, y);
a composite gradient calculation unit, configured to use the formula
G = √(Gx² + Gy²)
to perform the composite gradient calculation on the composite gradient x-component Gx and the composite gradient y-component Gy, generating the gray edge image GEI(x, y); wherein G is the composite gradient.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a method and a system for extracting an aviation image road traffic sign, wherein the method comprises the steps of firstly carrying out gray level conversion on a multispectral aviation image to obtain a gray level image; then, carrying out texture feature extraction and image enhancement on the obtained gray level image by adopting Gabor filtering and two-dimensional convolution operation; and then, carrying out edge detection and binary segmentation on the enhanced image by adopting a Sobel operator to obtain a binary edge image, and finally, extracting and filling the road traffic sign on the binary edge image by utilizing a minimum and maximum object deletion method. The method can effectively extract the road traffic signs under different light rays, improves the extraction precision of the road traffic signs, and has stronger robustness and application value.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a general concept diagram of the method for extracting an aerial image road traffic sign provided by the invention;
FIG. 2 is a flowchart illustrating an embodiment of a method for extracting an aviation image road traffic sign according to the present invention;
FIG. 3 is a schematic view of a multi-spectral aerial image provided in an embodiment of the present invention; fig. 3(a) is a schematic diagram of a first scene multispectral aerial image provided in a first embodiment of the present invention, and fig. 3(b) is a schematic diagram of a second scene multispectral aerial image provided in a second embodiment of the present invention;
fig. 4 is a schematic view of a grayscale image corresponding to a multispectral aerial image provided in an embodiment of the present invention; fig. 4(a) is a schematic diagram of a grayscale image corresponding to a first scene multispectral aerial image provided in a first embodiment of the present invention, and fig. 4(b) is a schematic diagram of a grayscale image corresponding to a second scene multispectral aerial image provided in a second embodiment of the present invention;
FIG. 5 is a histogram of a gray scale image according to an embodiment of the present invention; fig. 5(a) is a grayscale image histogram of a first scene multispectral aerial image according to a first embodiment of the present invention, and fig. 5(b) is a grayscale image histogram of a second scene multispectral aerial image according to a second embodiment of the present invention;
FIG. 6 is a schematic diagram of a Gabor filter signature provided by the present invention; wherein, fig. 6(a) is a schematic size diagram of a Gabor filter, and fig. 6(b) is a schematic real part diagram of the Gabor filter;
fig. 7 is a schematic diagram of an enhanced image corresponding to a multispectral aerial image according to an embodiment of the present invention; fig. 7(a) is a schematic diagram of an enhanced image corresponding to a first scene multispectral aerial image provided in a first embodiment of the present invention, and fig. 7(b) is a schematic diagram of an enhanced image corresponding to a second scene multispectral aerial image provided in a second embodiment of the present invention;
FIG. 8 is a schematic diagram of a Sobel operating template provided by the present invention; wherein, fig. 8(a) is a schematic diagram of a horizontal edge of a pixel template provided by the present invention, and fig. 8(b) is a schematic diagram of a vertical edge of the pixel template provided by the present invention;
fig. 9 is a schematic diagram of a gray-scale edge image corresponding to a multispectral aerial image according to an embodiment of the present invention; fig. 9(a) is a schematic diagram of a gray scale edge image corresponding to a first scene multispectral aerial image provided by a first embodiment of the present invention, and fig. 9(b) is a schematic diagram of a gray scale edge image corresponding to a second scene multispectral aerial image provided by a second embodiment of the present invention;
fig. 10 is a schematic diagram of a binary edge image corresponding to a multispectral aerial image according to an embodiment of the present invention; fig. 10(a) is a schematic diagram of a binary edge image corresponding to a first scene multispectral aerial image provided in the first embodiment of the present invention, and fig. 10(b) is a schematic diagram of a binary edge image corresponding to a second scene multispectral aerial image provided in the second embodiment of the present invention;
FIG. 11 is a schematic diagram of 8-connectivity provided by the present invention;
fig. 12 is a schematic diagram illustrating a road traffic sign extraction result in a multispectral aerial image according to an embodiment of the present invention; fig. 12(a) is a schematic diagram of a road traffic sign in a first extracted scene multispectral aerial image according to a first embodiment of the present invention, and fig. 12(b) is a schematic diagram of a road traffic sign in a second extracted scene multispectral aerial image according to a second embodiment of the present invention;
fig. 13 is a schematic diagram illustrating a positioning result of a road traffic sign in a multispectral aerial image according to an embodiment of the present invention; fig. 13(a) is a schematic diagram of a positioning result of a road traffic sign in a first scene multispectral aerial image extracted in the first embodiment of the present invention, and fig. 13(b) is a schematic diagram of a positioning result of a road traffic sign in a second scene multispectral aerial image extracted in the second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Remote sensing images cannot extract road traffic signs finely because of their limited spatial resolution, whereas aerial images are convenient to capture and have higher resolution, so they can be used directly to extract and position road traffic signs. The invention therefore provides a method and system for extracting road surface traffic signs from aerial images, which effectively extract and position the road surface traffic signs in an aerial image based on a minimum-and-maximum object deletion method and improve the extraction accuracy of road surface traffic signs.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is an overall concept diagram of the method for extracting an aerial image road traffic sign provided by the invention. Because the extraction of the road traffic sign is easily influenced by the color space of the image, the edge structure of the object, the contrast, the gradient and the like, a great deal of processing work is firstly carried out on the original image when the road sign is extracted. Referring to fig. 1, the present invention first converts an input multispectral aerial Image into a Gray Image (GI); then, performing enhancement processing on the GI by Gabor and two-dimensional convolution to obtain an Enhanced Image (EI), and performing edge extraction on the EI by using a Sobel operator to obtain a Gray Edge Image (GEI); then, segmenting the GEI by using a threshold value to obtain a Binary Edge Image (BEI); and finally, deleting the minimum object and the maximum object of the BEI by a minimum and maximum object deletion method to obtain a final target object and positioning the final target object.
Fig. 2 is a specific flowchart of the method for extracting an aerial image road traffic sign according to the present invention. Referring to fig. 2, the method for extracting an aerial image road traffic sign specifically includes:
step 201: and acquiring a multispectral aerial image containing the road traffic sign.
Fig. 3 is a schematic view of a multispectral aerial image provided by an embodiment of the present invention, in which fig. 3(a) is a schematic view of a first multispectral aerial image provided by a first embodiment of the present invention, and fig. 3(b) is a schematic view of a second multispectral aerial image provided by a second embodiment of the present invention. As shown in fig. 3, the multispectral aerial image is an RGB image.
Step 202: and converting the multispectral aerial image into a gray level image.
Converting an RGB aerial image to a grayscale image is achieved by discarding the hue and saturation of the original image while preserving its brightness. Taking the R, G and B channels as axes establishes a rectangular color-space coordinate system in which the color of each pixel corresponds to a point in three-dimensional space, while a gray value corresponds to a point on the line R = G = B. RGB values are converted to gray values by mapping points of the three-dimensional color space onto this one-dimensional line, i.e. by projecting each RGB point perpendicularly onto the line R = G = B, which amounts to computing a weighted sum of the R, G and B components. The calculation formula is as follows:
GI(x,y)=0.299×R+0.587×G+0.114×B (1)
In formula (1), R, G and B are the color values of the R, G and B channels of the multispectral aerial image; GI(x, y) is the pixel brightness value at (x, y), 0 ≤ GI(x, y) ≤ 255; x and y are the horizontal and vertical coordinates of the pixel point (x, y).
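As an illustrative sketch (not part of the patent text), the weighted-sum conversion of formula (1) can be written in Python with NumPy as follows; the function name and the H×W×3, R-G-B array layout are assumptions:

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 aerial image (8-bit, R-G-B order) to grayscale
    with the weighted sum of formula (1)."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gi = 0.299 * r + 0.587 * g + 0.114 * b          # formula (1)
    return np.clip(gi, 0, 255).astype(np.uint8)     # 0 <= GI(x, y) <= 255
```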
Fig. 4(a) and 4(b) show grayscale images generated by performing grayscale conversion on the first scene multispectral aerial image and the second scene multispectral aerial image, respectively.
Fig. 5 shows histograms of the grayscale images according to embodiments of the present invention, where the abscissa spans the rows and columns of the image and the ordinate is the gray value. Fig. 5(a) is the grayscale image histogram of the first-scene multispectral aerial image provided in the first embodiment of the present invention, and fig. 5(b) is the grayscale image histogram of the second-scene multispectral aerial image provided in the second embodiment of the present invention. Converting to grayscale effectively reduces the color space of the image and is an indispensable and critical step in image processing. As can be seen from figs. 5(a) and 5(b), x1 ∈ (175, 230) and x2 ∈ (72, 112), where x1 is the abscissa of fig. 5(a) and x2 is the abscissa of fig. 5(b).
Step 203: and adopting Gabor filtering and two-dimensional convolution operation to enhance the gray level image to generate an enhanced image.
(1) Gabor filter
The Gabor filter is based on a windowed Fourier transform, and its frequency and orientation representation closely resembles the human visual system. The ordinary Fourier transform mixes the frequency characteristics of different image positions; the Gabor filter overcomes this problem well, can capture the local frequency characteristics of image space, is a good texture detection method, and can effectively detect the edge information of an image.
The two-dimensional Gabor filter adopted by the invention is formed by superposing a trigonometric function (such as a sine function) and a Gaussian function, and the formula is defined as follows:
Complex representation of the two-dimensional Gabor filter:
Gabor(x, y; λ, θ, ψ, σ, γ) = exp(−(x'² + γ²y'²)/(2σ²)) × exp(i(2πx'/λ + ψ))   (2)
where
x′=xcosθ+ysinθ (3)
y′=-xsinθ+ycosθ (4)
in the formula: x and y represent horizontal and vertical coordinates of the pixel point (x and y); λ represents the wavelength of the Gabor filter, and takes 60; theta represents the direction angle of the Gabor filter and takes 90 degrees; psi denotes the phase offset of the Gabor filter, and takes the value 0; sigma represents the standard deviation of the Gaussian function and takes the value of 0.3; gamma represents the width ratio and takes 1; i is an imaginary unit. A Gabor filter characteristic diagram corresponding to the grayscale image can be obtained through calculation according to formula (2), and the generated characteristic diagram is shown in fig. 6, where fig. 6(a) is a schematic diagram of the size of the Gabor filter, and fig. 6(b) is a schematic diagram of the real part of the Gabor filter.
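A minimal sketch of building such a complex Gabor kernel in Python/NumPy is shown below. It follows formulas (2)-(4) with the parameter values quoted above; the kernel size (31×31) and the sampling of x and y on an integer grid centred at the origin are assumptions, not values given in the patent:

```python
import numpy as np

def gabor_kernel(size=31, lam=60.0, theta=np.deg2rad(90.0),
                 psi=0.0, sigma=0.3, gamma=1.0) -> np.ndarray:
    """Complex two-dimensional Gabor kernel following formulas (2)-(4)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    x_p = x * np.cos(theta) + y * np.sin(theta)      # formula (3)
    y_p = -x * np.sin(theta) + y * np.cos(theta)     # formula (4)
    envelope = np.exp(-(x_p**2 + gamma**2 * y_p**2) / (2.0 * sigma**2))
    carrier = np.exp(1j * (2.0 * np.pi * x_p / lam + psi))
    return envelope * carrier                        # formula (2)
```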
(2) Two-dimensional convolution operation
The two-dimensional convolution operation helps to extract texture and similar information; Gaussian blur, edge detection and morphological operations can all be realized through convolution. For the grayscale image GIxy = {GI(x, y), (1 ≤ x ≤ M, 1 ≤ y ≤ N)} and the Gabor filter feature map Gaborxy = {Gabor(x, y; λ, θ, ψ, σ, γ), (1 ≤ x ≤ m, 1 ≤ y ≤ n)}, the two-dimensional convolution operation is calculated as:
EI(x, y) = Σ(s=1..m) Σ(t=1..n) GI(x − s + 1, y − t + 1) × Gabor(s, t; λ, θ, ψ, σ, γ)   (5)
In the formula, EI(x, y) is the enhanced image; M and N are the total numbers of rows and columns of the image, m and n are the total numbers of rows and columns of the filter, and generally m < M and n < N.
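A sketch of this enhancement step is given below, assuming SciPy's convolve2d for the two-dimensional convolution and assuming that the magnitude of the complex Gabor response, normalised to [0, 1], serves as the enhanced image EI (the patent does not state how the complex response is reduced to a real image):

```python
import numpy as np
from scipy.signal import convolve2d

def enhance(gi: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Two-dimensional convolution of the grayscale image GI with the Gabor
    kernel, as in formula (5); 'same' padding keeps EI the size of GI."""
    response = convolve2d(gi.astype(np.float64), kernel,
                          mode="same", boundary="symm")
    ei = np.abs(response)    # assumed: use the magnitude of the complex response
    return ei / ei.max()     # assumed: normalise so EI lies in [0, 1]
```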
In the embodiment of the present invention, after the gray scale image GI (x, y) is enhanced by Gabor filtering and two-dimensional convolution operation, an enhanced image EI (x, y) is shown in fig. 7, where fig. 7(a) is an enhanced image corresponding to the first scene multispectral aerial image provided in the first embodiment of the present invention, and fig. 7(b) is an enhanced image corresponding to the second scene multispectral aerial image provided in the second embodiment of the present invention. Comparing fig. 4 and fig. 7, it can be seen that the arrow part after Gabor and two-dimensional convolution operation is obviously enhanced compared with the obtained gray scale image.
Step 204: and carrying out edge detection on the enhanced image by adopting a Sobel edge detection operator to obtain a gray edge image.
Road traffic signs have a simple structure, whereas the non-sign ground objects on either side of the road carry complex information. The aim is therefore to highlight the sign region while blurring the non-sign regions and merging most of their edges, and the Sobel operator makes this approach feasible. Fig. 8 is a schematic diagram of the Sobel operation templates provided by the present invention, where fig. 8(a) is the pixel template for horizontal edges and fig. 8(b) is the pixel template for vertical edges. As shown in fig. 8, the Sobel operator has two convolution templates, one for detecting horizontal edges and one for detecting vertical edges.
The x-component, y-component and composite gradient of the enhanced image EI(x, y) are calculated with the Sobel edge detection operator:
Gx=EIx(x,y)=EI(x-1,y+1)+2EI(x,y+1)+EI(x+1,y+1)-(EI(x-1,y-1)+2EI(x,y-1)+EI(x+1,y-1)) (6)
Gy=EIy(x,y)=EI(x+1,y-1)+2EI(x+1,y)+EI(x+1,y+1)-(EI(x-1,y-1)+2EI(x-1,y)+EI(x-1,y+1)) (7)
G = √(Gx² + Gy²)   (8)
In the formulas, Gx is the composite gradient x-component, equal to the x-component EIx(x, y) of the enhanced image EI(x, y); Gy is the composite gradient y-component, equal to the y-component EIy(x, y) of the enhanced image EI(x, y); and G is the composite gradient. The composite gradient G is computed pixel by pixel: moving the two templates across the image gives the gradient value of every pixel, i.e. the gray edge image GEI(x, y), as shown in fig. 9. Fig. 9(a) is the grayscale edge image corresponding to the first-scene multispectral aerial image provided in the first embodiment of the present invention, and fig. 9(b) is the grayscale edge image corresponding to the second-scene multispectral aerial image provided in the second embodiment of the present invention.
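A sketch of equations (6)-(8) using SciPy correlation, so that the templates are applied exactly as written (rows indexed by x, columns by y); this is an illustration, not the patent's MATLAB implementation:

```python
import numpy as np
from scipy.ndimage import correlate

def sobel_edges(ei: np.ndarray) -> np.ndarray:
    """Composite gradient GEI(x, y) of the enhanced image EI, formulas (6)-(8)."""
    # Gx template: column y+1 minus column y-1, weighted 1-2-1 over rows.
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)
    # Gy template: row x+1 minus row x-1, weighted 1-2-1 over columns.
    ky = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]], dtype=np.float64)
    gx = correlate(ei.astype(np.float64), kx, mode="nearest")   # formula (6)
    gy = correlate(ei.astype(np.float64), ky, mode="nearest")   # formula (7)
    return np.sqrt(gx**2 + gy**2)                               # formula (8)
```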
Step 205: and segmenting the gray scale edge image by using a threshold segmentation method to obtain a binary edge image.
The Sobel edge detection result GEI(x, y) of the enhanced image EI(x, y) obtained above is then converted into the binary edge image BEI(x, y) using a set image segmentation threshold (in the invention the image segmentation threshold T is 0.5156), according to the formula:
BEI(x, y) = 1 if GEI(x, y) ≥ T, otherwise BEI(x, y) = 0   (9)
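A one-line thresholding sketch of formula (9); treating pixels at or above T as edge pixels (value 1) and normalising GEI to [0, 1] first are assumptions:

```python
import numpy as np

def binarize(gei: np.ndarray, t: float = 0.5156) -> np.ndarray:
    """Threshold segmentation of the gray edge image GEI, formula (9)."""
    gei = gei / gei.max()                 # assumed: GEI normalised to [0, 1]
    return (gei >= t).astype(np.uint8)    # 1 = edge, 0 = background
```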
the image segmentation result obtained by the embodiment of the invention is shown in fig. 10. Fig. 10(a) is a binary edge image corresponding to a first scene multispectral aerial image provided in the first embodiment of the present invention, and fig. 10(b) is a binary edge image corresponding to a second scene multispectral aerial image provided in the second embodiment of the present invention. As can be seen from fig. 9(a) and 9(b), after Sobel operation, the edge of the mark region is well detected, and the non-mark region is obviously detected as a whole piece, which is very critical for extracting the subsequent traffic mark. And the improved Sobel operator is used, so that the detection of the peripheral area is too fine, and the experimental result is influenced. Fig. 10(a) and 10(b) are binary edge images BEI (x, y) obtained by threshold segmentation, and it can be seen that the road traffic sign outline has been accurately segmented.
Step 206: and deleting the minimum object and the maximum object of the binary edge image by adopting a minimum and maximum object deletion method, and extracting the road traffic sign in the multispectral aerial image.
According to the road traffic sign standard "Specification for urban road traffic signs and markings GB 51038-2015" issued in China, traffic signs painted on the road surface must meet certain size requirements. With the shooting height and scale of the multispectral aerial image data known, the theoretical size of a sign in the image can be calculated, and the minimum object deletion threshold P is set from this theoretical size; in the invention P is 300 pixels.
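The conversion from a sign's physical size to its theoretical pixel area can be sketched as below; the ground sampling distance and mark dimensions used here are purely illustrative assumptions, not values from the patent or from GB 51038-2015:

```python
# Illustrative only: assumed ground sampling distance and mark dimensions.
gsd_m = 0.05                       # assumed: 5 cm per pixel
mark_w_m, mark_l_m = 0.4, 2.0      # assumed painted-mark width and length (m)
area_px = (mark_w_m / gsd_m) * (mark_l_m / gsd_m)
print(f"theoretical mark area: {area_px:.0f} pixels")   # 320 pixels with these assumptions
```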
The method first deletes small-area objects with the bwareaopen function (an image processing function in MATLAB): small objects are removed using an 8-neighbourhood, and every object in the binary edge image whose area is smaller than P is deleted (P is 300 pixels in the invention).
When deleting the maximum objects, the sibling objects of the binary edge image are first determined through 8-connectivity and stored in a list by searching; the area of each stored object is then computed from the list, objects no larger than the maximum object deletion threshold Q are kept and objects larger than Q are deleted (Q is 1000 pixels in the invention), and the road traffic signs in the multispectral aerial image are extracted. The 8-connectivity diagram and calculation rule are shown in fig. 11 and formula (10), respectively:
N8(p) = N4 ∪ {(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)}   (10)
In the formula, x and y are pixel coordinates, N4 denotes 4-connectivity, and N8(p) denotes 8-connectivity. Pixels are connected when their edges or corners touch: two adjacent pixels that are connected in the horizontal, vertical or diagonal direction belong to the same object, i.e. they are sibling objects.
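A sketch of the minimum-and-maximum object deletion using SciPy connected-component labelling with 8-connectivity; this mirrors, but is not, the MATLAB bwareaopen-based implementation described above:

```python
import numpy as np
from scipy.ndimage import label

def min_max_object_deletion(bei: np.ndarray, p: int = 300, q: int = 1000) -> np.ndarray:
    """Remove objects smaller than P or larger than Q pixels from the binary
    edge image BEI, using 8-connectivity as in fig. 11 and formula (10)."""
    structure = np.ones((3, 3), dtype=int)           # 8-connected neighbourhood
    labels, num = label(bei, structure=structure)
    areas = np.bincount(labels.ravel())              # pixel count per label (0 = background)
    keep = np.zeros(num + 1, dtype=bool)
    keep[1:] = (areas[1:] >= p) & (areas[1:] <= q)   # keep P <= area <= Q
    return keep[labels].astype(np.uint8)
```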
The final results of extracting the road traffic signs in the multispectral aerial images are shown in fig. 12. Fig. 12(a) is a schematic diagram of the road traffic sign extracted from the first-scene multispectral aerial image according to the first embodiment of the present invention, and fig. 12(b) is a schematic diagram of the road traffic sign extracted from the second-scene multispectral aerial image according to the second embodiment of the present invention. As can be seen from fig. 12, the arrow mark in fig. 12(a) is extracted completely, while the arrow mark in fig. 12(b) is extracted less completely, though still accurately. This is because there is relatively little interfering information around the arrow in fig. 12(a), whereas in fig. 12(b) fine road surface details cause a small amount of non-sign information to be detected together with the arrow mark, so the extraction is less ideal. It can also be seen that shaded areas still have some influence on arrow mark extraction.
Furthermore, the extracted road traffic signs can be positioned with the minimum rectangle enclosing the sign region. Positioning quickly locks onto the location of a traffic sign and allows its damage condition to be inspected, saving considerable manpower, material and financial resources. From the extracted road traffic sign result image, the minimum rectangle containing the sign region is computed from the attributes of the image region, so that the target object is located quickly. The target positioning results provided by the embodiments of the invention are shown in fig. 13. Fig. 13(a) is a schematic diagram of the positioning result of the road traffic sign extracted from the first-scene multispectral aerial image in the first embodiment of the present invention, and fig. 13(b) is a schematic diagram of the positioning result of the road traffic sign extracted from the second-scene multispectral aerial image in the second embodiment of the present invention.
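A sketch of the positioning step: labelling the extracted sign mask and returning the minimum bounding rectangle of each region (SciPy's find_objects plays the role of reading region attributes in MATLAB); the return format is an assumption:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def locate_signs(sign_mask: np.ndarray):
    """Minimum bounding rectangle (row_start, row_stop, col_start, col_stop)
    of every extracted sign region."""
    labels, _ = label(sign_mask, structure=np.ones((3, 3), dtype=int))
    boxes = find_objects(labels)          # one (row_slice, col_slice) per region
    return [(s[0].start, s[0].stop, s[1].start, s[1].stop) for s in boxes]
```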
At present, traffic sign detection and identification mainly acquire road information in front of the vehicle with a vehicle-mounted camera and then apply various detection methods. However, aerial images and vehicle-mounted cameras differ in imaging and data acquisition, so existing methods are difficult to apply directly to extracting road surface signs from aerial images. The invention therefore provides an aerial image road surface traffic sign extraction method based on minimum-and-maximum object deletion: the sign objects are extracted with the minimum-and-maximum object deletion method, and the extraction quality is evaluated with the false detection rate, the missed detection rate and the overall accuracy. The experiments were run on the following platform: Windows 10, Intel i7-9700K CPU, GTX 1070Ti graphics card, 16 GB of memory, and MATLAB 2018b. The accuracy evaluation results are shown in Table 1:
TABLE 1 Accuracy evaluation results

                     False detection rate   Missed detection rate   Overall accuracy   Total time (s)
First scene image           1.06%                  0.23%                 99.76%            1.3153
Second scene image          1.07%                  0.63%                 99.36%            1.2220
As the data in Table 1 show, both the false detection rate and the missed detection rate remain low throughout the road surface sign extraction, and the overall accuracy reaches 99.76% and 99.36% respectively. The total time consumed by the two experiments is also low, which demonstrates the effectiveness of the method for extracting traffic signs from aerial image data; the whole process is robust and of high application value.
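The patent does not give formulas for the false detection rate, missed detection rate and overall accuracy; the sketch below assumes simple pixel-wise definitions against a reference mask and is only meant to illustrate how such an evaluation could be scripted:

```python
import numpy as np

def evaluate(extracted: np.ndarray, reference: np.ndarray):
    """Assumed pixel-wise quality measures for an extraction result."""
    ext, ref = extracted.astype(bool), reference.astype(bool)
    false_detection = (ext & ~ref).sum() / max(ext.sum(), 1)   # extracted but not real
    missed_detection = (~ext & ref).sum() / max(ref.sum(), 1)  # real but not extracted
    overall_accuracy = (ext == ref).mean()                     # agreement over all pixels
    return false_detection, missed_detection, overall_accuracy
```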
Based on the aviation image road traffic sign extraction method provided by the invention, the invention also provides an aviation image road traffic sign extraction system, which comprises the following steps:
the original image acquisition module is used for acquiring a multispectral aerial image containing a road traffic sign; the multispectral aerial image is an RGB image;
the image conversion module is used for converting the multispectral aerial image into a gray level image;
the image enhancement module is used for enhancing the gray level image by adopting Gabor filtering and two-dimensional convolution operation to generate an enhanced image;
the edge detection module is used for carrying out edge detection on the enhanced image by adopting a Sobel edge detection operator to obtain a gray edge image;
the image segmentation module is used for segmenting the gray edge image by utilizing a threshold segmentation method to obtain a binary edge image;
and the minimum and maximum object deleting module is used for deleting the minimum object and the maximum object of the binary edge image by adopting a minimum and maximum object deleting method and extracting the road traffic sign in the multispectral aerial image.
The image conversion module specifically comprises:
the image conversion unit is used for converting the multispectral aerial image into a grayscale image by using the formula GI(x, y) = 0.299×R + 0.587×G + 0.114×B; wherein R, G and B are the color values of the R, G and B channels of the multispectral aerial image; GI(x, y) is the pixel brightness value at (x, y), 0 ≤ GI(x, y) ≤ 255, and x and y are the horizontal and vertical coordinates of the pixel point.
The image enhancement module specifically comprises:
a Gabor filtering unit, configured to use the two-dimensional Gabor filter
Gabor(x, y; λ, θ, ψ, σ, γ) = exp(−(x'² + γ²y'²)/(2σ²)) × exp(i(2πx'/λ + ψ))
to carry out Gabor filtering on the grayscale image and generate the Gabor filter feature map Gabor(x, y; λ, θ, ψ, σ, γ); wherein x and y are the horizontal and vertical coordinates of the pixel point (x, y); λ is the wavelength of the Gabor filter; θ is the orientation angle of the Gabor filter; ψ is the phase offset of the Gabor filter; σ is the standard deviation of the Gaussian function; γ is the width ratio; x' = x cosθ + y sinθ; y' = −x sinθ + y cosθ; i is the imaginary unit;
a two-dimensional convolution operation unit, configured to use the formula
EI(x, y) = Σ(s=1..m) Σ(t=1..n) GI(x − s + 1, y − t + 1) × Gabor(s, t; λ, θ, ψ, σ, γ)
to perform a two-dimensional convolution operation on the grayscale image GIxy = {GI(x, y), (1 ≤ x ≤ M, 1 ≤ y ≤ N)} and the Gabor filter feature map Gaborxy = {Gabor(x, y; λ, θ, ψ, σ, γ), (1 ≤ x ≤ m, 1 ≤ y ≤ n)}, generating the enhanced image EI(x, y); wherein M and N are the total numbers of rows and columns of the grayscale image, and m and n are the total numbers of rows and columns of the Gabor filter.
The edge detection module specifically comprises:
an x-component calculation unit, configured to use the formula Gx = EIx(x, y) = EI(x−1, y+1) + 2EI(x, y+1) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x, y−1) + EI(x+1, y−1)) to perform the x-component calculation on the enhanced image EI(x, y) and generate the composite gradient x-component Gx; wherein EIx(x, y) is the x-component of the enhanced image EI(x, y);
a y-component calculation unit, configured to use the formula Gy = EIy(x, y) = EI(x+1, y−1) + 2EI(x+1, y) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x−1, y) + EI(x−1, y+1)) to perform the y-component calculation on the enhanced image EI(x, y) and generate the composite gradient y-component Gy; wherein EIy(x, y) is the y-component of the enhanced image EI(x, y);
a composite gradient calculation unit, configured to use the formula
G = √(Gx² + Gy²)
to perform the composite gradient calculation on the composite gradient x-component Gx and the composite gradient y-component Gy, generating the gray edge image GEI(x, y); wherein G is the composite gradient.
The image segmentation module specifically comprises:
an image segmentation unit, configured to use the formula
BEI(x, y) = 1 if GEI(x, y) ≥ T, otherwise BEI(x, y) = 0
to segment the gray edge image and convert the gray edge image GEI(x, y) into the binary edge image BEI(x, y); where T is the image segmentation threshold.
The minimum and maximum object deletion module specifically includes:
a minimum object deleting unit, configured to delete an object whose area is smaller than a minimum object deletion threshold in the binary edge image, and generate a minimum object-deleted image;
an 8-connectivity computing unit, configured to determine the sibling objects of the binary edge image through 8-connectivity;
and the maximum object deleting unit is used for deleting the objects with the area larger than the maximum object deleting threshold value in the sibling objects and extracting the road traffic signs in the multispectral aerial image.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (6)

1. An aviation image road traffic sign extraction method is characterized by comprising the following steps:
acquiring a multispectral aerial image containing a road traffic sign; the multispectral aerial image is an RGB image;
converting the multispectral aerial image into a gray level image, specifically comprising:
converting the multispectral aerial image into a grayscale image by using the formula GI(x, y) = 0.299×R + 0.587×G + 0.114×B; wherein R, G and B are the color values of the R, G and B channels of the multispectral aerial image; GI(x, y) is the pixel brightness value at (x, y), 0 ≤ GI(x, y) ≤ 255, and x and y are the horizontal and vertical coordinates of the pixel points;
adopting Gabor filtering and two-dimensional convolution operation to enhance the gray level image, and generating an enhanced image, which specifically comprises:
using the two-dimensional Gabor filter
Gabor(x, y; λ, θ, ψ, σ, γ) = exp(−(x'² + γ²y'²)/(2σ²)) × exp(i(2πx'/λ + ψ))
to carry out Gabor filtering on the grayscale image and generate the Gabor filter feature map Gabor(x, y; λ, θ, ψ, σ, γ); wherein x and y are the horizontal and vertical coordinates of the pixel point (x, y); λ is the wavelength of the Gabor filter; θ is the orientation angle of the Gabor filter; ψ is the phase offset of the Gabor filter; σ is the standard deviation of the Gaussian function; γ is the width ratio; x' = x cosθ + y sinθ; y' = −x sinθ + y cosθ; i is the imaginary unit;
using the formula
EI(x, y) = Σ(s=1..m) Σ(t=1..n) GI(x − s + 1, y − t + 1) × Gabor(s, t; λ, θ, ψ, σ, γ)
to perform a two-dimensional convolution operation on the grayscale image GIxy = {GI(x, y), (1 ≤ x ≤ M, 1 ≤ y ≤ N)} and the Gabor filter feature map Gaborxy = {Gabor(x, y; λ, θ, ψ, σ, γ), (1 ≤ x ≤ m, 1 ≤ y ≤ n)}, generating the enhanced image EI(x, y); wherein M and N are the total numbers of rows and columns of the grayscale image, and m and n are the total numbers of rows and columns of the Gabor filter;
adopting a Sobel edge detection operator to carry out edge detection on the enhanced image to obtain a gray edge image;
segmenting the gray scale edge image by using a threshold segmentation method to obtain a binary edge image;
and deleting the minimum object and the maximum object of the binary edge image by adopting a minimum and maximum object deletion method, and extracting the road traffic sign in the multispectral aerial image.
2. The method for extracting an aerial image road surface traffic sign according to claim 1, wherein the edge detection is performed on the enhanced image by using a Sobel edge detection operator to obtain a gray edge image, and the method specifically comprises the following steps:
using the formula Gx = EIx(x, y) = EI(x−1, y+1) + 2EI(x, y+1) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x, y−1) + EI(x+1, y−1)) to perform the x-component calculation on the enhanced image EI(x, y) and generate the composite gradient x-component Gx; wherein EIx(x, y) is the x-component of the enhanced image EI(x, y);
using the formula Gy = EIy(x, y) = EI(x+1, y−1) + 2EI(x+1, y) + EI(x+1, y+1) − (EI(x−1, y−1) + 2EI(x−1, y) + EI(x−1, y+1)) to perform the y-component calculation on the enhanced image EI(x, y) and generate the composite gradient y-component Gy; wherein EIy(x, y) is the y-component of the enhanced image EI(x, y);
using the formula
G = √(Gx² + Gy²)
to perform the composite gradient calculation on the composite gradient x-component Gx and the composite gradient y-component Gy, generating the gray edge image GEI(x, y); wherein G is the composite gradient.
3. The method for extracting an aerial image road surface traffic sign according to claim 2, wherein the segmenting the gray-scale edge image by using a threshold segmentation method to obtain a binary edge image specifically comprises:
using the formula
BEI(x, y) = 1 if GEI(x, y) ≥ T, otherwise BEI(x, y) = 0
to segment the gray edge image and convert the gray edge image GEI(x, y) into the binary edge image BEI(x, y); where T is the image segmentation threshold.
4. The method for extracting a road traffic sign according to claim 3, wherein the extracting the road traffic sign in the multispectral aerial image by performing minimum object and maximum object deletion on the binary edge image by using a minimum and maximum object deletion method specifically comprises:
deleting the object with the area smaller than the minimum object deletion threshold value in the binary edge image, and generating a minimum object deleted image;
determining the sibling objects of the binary edge image through 8-connectivity;
and deleting the objects with the area larger than the maximum object deletion threshold value in the sibling objects, and extracting the road traffic signs in the multispectral aerial image.
5. An aerial image pavement traffic sign extraction system, the system comprising:
the original image acquisition module is used for acquiring a multispectral aerial image containing a road traffic sign; the multispectral aerial image is an RGB image;
the image conversion module is used for converting the multispectral aerial image into a gray level image;
the image conversion module specifically comprises:
the image conversion unit is used for converting the multispectral aerial image into a grayscale image by using the formula GI(x, y) = 0.299×R + 0.587×G + 0.114×B; wherein R, G and B are the color values of the R, G and B channels of the multispectral aerial image; GI(x, y) is the pixel brightness value at (x, y), 0 ≤ GI(x, y) ≤ 255, and x and y are the horizontal and vertical coordinates of the pixel points;
the image enhancement module is used for enhancing the gray level image by adopting Gabor filtering and two-dimensional convolution operation to generate an enhanced image;
the image enhancement module specifically comprises:
a Gabor filtering unit, configured to use the two-dimensional Gabor filter
Gabor(x, y; λ, θ, ψ, σ, γ) = exp(−(x'² + γ²y'²)/(2σ²)) × exp(i(2πx'/λ + ψ))
to carry out Gabor filtering on the grayscale image and generate the Gabor filter feature map Gabor(x, y; λ, θ, ψ, σ, γ); wherein x and y are the horizontal and vertical coordinates of the pixel point (x, y); λ is the wavelength of the Gabor filter; θ is the orientation angle of the Gabor filter; ψ is the phase offset of the Gabor filter; σ is the standard deviation of the Gaussian function; γ is the width ratio; x' = x cosθ + y sinθ; y' = −x sinθ + y cosθ; i is the imaginary unit;
a two-dimensional convolution operation unit for performing two-dimensional convolution operation on the gray image GI = {GI(x, y), 1 ≤ x ≤ M, 1 ≤ y ≤ N} and the Gabor filter characteristic diagram Gabor = {Gabor(x, y; λ, θ, ψ, σ, γ), 1 ≤ x ≤ m, 1 ≤ y ≤ n} by using the formula EI(x, y) = Σᵢ Σⱼ GI(i, j) × Gabor(x - i + 1, y - j + 1; λ, θ, ψ, σ, γ), the sum running over all indices for which both factors are defined, to generate an enhanced image EI(x, y); wherein M and N respectively represent the total numbers of rows and columns of the gray image, and m and n respectively represent the total numbers of rows and columns of the Gabor filter (a minimal code sketch of this conversion-and-enhancement step is given after claim 5 below);
the edge detection module is used for carrying out edge detection on the enhanced image by adopting a Sobel edge detection operator to obtain a gray edge image;
the image segmentation module is used for segmenting the gray edge image by utilizing a threshold segmentation method to obtain a binary edge image;
and the minimum and maximum object deleting module is used for deleting the minimum object and the maximum object of the binary edge image by adopting a minimum and maximum object deleting method and extracting the road traffic sign in the multispectral aerial image.
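For the image conversion and Gabor enhancement modules of claim 5, a minimal sketch follows (as referenced in the convolution unit above). The kernel size, its centring on the grid, the use of the real part of the complex Gabor response, and all parameter values are illustrative assumptions; the patent fixes the filter form but not these implementation details.

```python
import numpy as np
from scipy.ndimage import convolve

def to_gray(rgb):
    """Gray image GI(x, y) = 0.299*R + 0.587*G + 0.114*B from an RGB aerial image."""
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    return 0.299 * r + 0.587 * g + 0.114 * b

def gabor_kernel(size, lam, theta, psi, sigma, gamma):
    """Real part of the two-dimensional Gabor filter on a size x size grid."""
    half = size // 2
    coords = np.arange(-half, half + 1)
    x, y = np.meshgrid(coords, coords)
    xr = x * np.cos(theta) + y * np.sin(theta)        # x' = x cos(theta) + y sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)       # y' = -x sin(theta) + y cos(theta)
    envelope = np.exp(-(xr**2 + (gamma**2) * yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)      # Re{exp(i(2*pi*x'/lambda + psi))}
    return envelope * carrier

def enhance(gray, lam=8.0, theta=0.0, psi=0.0, sigma=4.0, gamma=0.5, size=15):
    """Enhanced image EI(x, y): gray image convolved with the Gabor kernel.

    The default parameter values are placeholders for illustration only.
    """
    return convolve(gray, gabor_kernel(size, lam, theta, psi, sigma, gamma), mode="nearest")
```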
6. The system for extracting an aerial image road surface traffic sign according to claim 5, wherein the edge detection module specifically comprises:
an x-component calculation unit for performing x-component calculation on the enhanced image EI(x, y) by using the formula Gx = EIx(x, y) = EI(x-1, y+1) + 2EI(x, y+1) + EI(x+1, y+1) - (EI(x-1, y-1) + 2EI(x, y-1) + EI(x+1, y-1)) to generate a composite gradient x-component Gx; wherein EIx(x, y) represents the x-component of the enhanced image EI(x, y);
a y-component calculation unit for performing y-component calculation on the enhanced image EI(x, y) by using the formula Gy = EIy(x, y) = EI(x+1, y-1) + 2EI(x+1, y) + EI(x+1, y+1) - (EI(x-1, y-1) + 2EI(x-1, y) + EI(x-1, y+1)) to generate a composite gradient y-component Gy; wherein EIy(x, y) represents the y-component of the enhanced image EI(x, y);
a composite gradient calculation unit for performing composite gradient calculation on the composite gradient x-component Gx and the composite gradient y-component Gy by using the formula GEI(x, y) = G = √(Gx² + Gy²) to generate a gray edge image GEI(x, y); wherein G is the composite gradient.
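Putting the sketched modules together, a hypothetical end-to-end run of the system could look like the following. It assumes the functions from the sketches above are in scope; the synthetic input array, the threshold choice, and both area limits are placeholders, not values taken from the patent.

```python
import numpy as np

# Stand-in for a multispectral (RGB) aerial image containing road markings.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

gray = to_gray(rgb)                                        # image conversion module
ei = enhance(gray)                                         # image enhancement module
gei = sobel_edge_image(ei)                                 # edge detection module
bei = threshold_segment(gei, t=gei.mean() + gei.std())     # image segmentation module
signs = delete_min_max_objects(bei, min_area=50, max_area=5000)  # min/max object deletion module
```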
CN201910728779.1A 2019-08-08 2019-08-08 Method and system for extracting traffic signs on aviation image road surface Active CN110427902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910728779.1A CN110427902B (en) 2019-08-08 2019-08-08 Method and system for extracting traffic signs on aviation image road surface

Publications (2)

Publication Number Publication Date
CN110427902A (en) 2019-11-08
CN110427902B (en) 2020-06-12

Family

ID=68414965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910728779.1A Active CN110427902B (en) 2019-08-08 2019-08-08 Method and system for extracting traffic signs on aviation image road surface

Country Status (1)

Country Link
CN (1) CN110427902B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942546A (en) * 2014-05-08 2014-07-23 奇瑞汽车股份有限公司 Guide traffic marking identification system and method in municipal environment
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
CN108921943A (en) * 2018-06-29 2018-11-30 广东星舆科技有限公司 A kind of road threedimensional model modeling method based on lane grade high-precision map
CN109658504A (en) * 2018-10-31 2019-04-19 百度在线网络技术(北京)有限公司 Map datum mask method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110427902A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
Wu et al. A practical system for road marking detection and recognition
Liu et al. Building extraction from high resolution imagery based on multi-scale object oriented classification and probabilistic Hough transform
CN112819094B (en) Target detection and identification method based on structural similarity measurement
CN104463877B (en) A kind of water front method for registering based on radar image Yu electronic chart information
CN107563377A (en) It is a kind of to detect localization method using the certificate key area of edge and character area
CN102147867B (en) Method for identifying traditional Chinese painting images and calligraphy images based on subject
CN103544484A (en) Traffic sign identification method and system based on SURF
CN105069466A (en) Pedestrian clothing color identification method based on digital image processing
Wang et al. A vision-based road edge detection algorithm
CN106682641A (en) Pedestrian identification method based on image with FHOG- LBPH feature
Meng et al. Text detection in natural scenes with salient region
Deng et al. Detection and recognition of traffic planar objects using colorized laser scan and perspective distortion rectification
CN111160328A (en) Automatic traffic marking extraction method based on semantic segmentation technology
CN105139011A (en) Method and apparatus for identifying vehicle based on identification marker image
Wei et al. Detection of lane line based on Robert operator
CN109711420B (en) Multi-affine target detection and identification method based on human visual attention mechanism
CN104063682A (en) Pedestrian detection method based on edge grading and CENTRIST characteristic
CN115984806B (en) Dynamic detection system for road marking damage
CN110427902B (en) Method and system for extracting traffic signs on aviation image road surface
Huang Research on license plate image segmentation and intelligent character recognition
CN111476233A (en) License plate number positioning method and device
CN106355576A (en) SAR image registration method based on MRF image segmentation algorithm
Yu et al. High-Precision pixelwise SAR–optical image registration via flow fusion estimation based on an attention mechanism
Song et al. Color-based traffic sign detection
Ding et al. A comprehensive approach for road marking detection and recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant