CN112071078A - Traffic engineering environment intelligent detection system - Google Patents

Traffic engineering environment intelligent detection system

Info

Publication number
CN112071078A
Authority
CN
China
Prior art keywords
identification
unmanned aerial
aerial vehicle
engineering environment
traffic engineering
Prior art date
Legal status (assumed, not a legal conclusion): Granted
Application number
CN202010905521.7A
Other languages
Chinese (zh)
Other versions
CN112071078B (en)
Inventor
杨建国
孟宇
李晓霞
吴关
邓仁琼
靳晓清
王雷
Current Assignee (the listed assignees may be inaccurate)
Cats Testing Technology Beijing Co ltd
Jiaokeyuan Science And Technology Group Co ltd
Original Assignee
Cats Testing Technology Beijing Co ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Cats Testing Technology Beijing Co ltd
Priority to CN202010905521.7A
Publication of CN112071078A
Application granted
Publication of CN112071078B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled, identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled, identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09 Recognition of logos

Abstract

The invention discloses an intelligent detection system for a traffic engineering environment, comprising an unmanned aerial vehicle (1), a navigation unit (2), a camera unit (3) and an identification detection unit (4), and also discloses an intelligent detection method for a traffic engineering environment. In the disclosed system and method, the unmanned aerial vehicle captures the images, the difference in imaged brightness between the retroreflector of the traffic sign and its background panel is exploited to improve identification accuracy and speed, and the shake amount is used as an input condition of the model, further improving identification speed and accuracy.

Description

Traffic engineering environment intelligent detection system
Technical Field
The invention relates to an intelligent traffic engineering environment detection system, and belongs to the field of traffic.
Background
A road traffic sign is a graphic symbol that conveys traffic laws, regulations and road information. It expresses the rules of the road in a visual, concrete and concise form, conveys content that is difficult to describe in words, and serves as a facility for managing traffic and indicating driving direction so as to keep roads clear and driving safe.
With the rapid development of transportation infrastructure, the number of traffic signs on roads keeps growing. Signs must be inspected regularly so that damaged ones can be found and maintained, and the signs on newly completed roads must also be inspected.
Traditional detection often relies on manual inspection and is inefficient.
The prior art also includes methods in which a dedicated vehicle carries an on-board camera to photograph or film the signs along the road for recognition and detection, but this requires many personnel for the driving work, occupies road resources and creates hidden traffic-safety risks.
In addition, when pictures taken through a lens are recognized by artificial intelligence in the prior art, recognition is performed on the raw picture: the image background is complex, the amount of data to be processed is large, and recognition is slow, so the approach cannot meet practical requirements well; varying lighting environments further reduce the recognition accuracy.
The prior art also includes methods that reduce the data volume by converting the image to grayscale and binarizing it before recognizing the traffic sign, but this discards a large amount of image detail and results in a low recognition rate.
As for image blur caused by shaking, the prior art either reduces it by denoising the image directly or ignores it entirely and recognizes the blurred image as-is, so recognition accuracy is insufficient; when the vehicle shakes noticeably on a pothole-ridden section, or the lens shakes because of abnormal vehicle power, recognition slows down, accuracy drops further, and recognition may even fail entirely.
Therefore, there is a need to design an intelligent traffic engineering environment detection system with low light interference, high identification precision and no road resource occupation.
Disclosure of Invention
In order to overcome the above problems, the present inventors have conducted intensive research, and on one hand, designed an intelligent traffic engineering environment detection system, which includes an unmanned aerial vehicle 1, a navigation unit 2, a camera unit 3, and an identification detection unit 4.
The navigation unit 2, the camera unit 3 and the identification detection unit 4 are mounted on the unmanned aerial vehicle 1, an unmanned aerial vehicle flight path is preset in the navigation unit 2, and the path is set according to an actual path of a road, so that the unmanned aerial vehicle can fly along the road;
the camera unit 3 is used for shooting images and transmitting the images to the recognition detection unit 4;
the recognition detection unit 4 is provided with an image recognition module 41, and the image recognition module 41 analyzes and recognizes the image shot by the image shooting unit 3 to obtain a recognition result.
Further, a flash 31 and an illumination sensor 32 are provided on the imaging unit 3.
Preferably, the illumination sensor 32 is arranged on the top of the unmanned aerial vehicle, and the detection surface of the illumination sensor 32 is opposite to the direction of the lens shooting surface of the camera unit 3.
More preferably, the flash lamp 31 is a xenon flash lamp with a spectrum peak value within a range of 600-900 nm.
In a preferred embodiment, a high-pass filter is provided on flash lamp 31.
In a preferred embodiment, the camera unit 3 comprises two cameras, a visible light camera and a low-light black and white camera.
According to the invention, an image recognition model is provided in the recognition module 41, preferably a neural network model, more preferably a CNN model.
On the other hand, the invention also provides an intelligent traffic engineering environment detection method in which an unmanned aerial vehicle carrying a camera unit shoots, identifies and detects along a road to determine whether the name and the position of each road traffic sign are consistent with the road traffic sign construction drawing.
Specifically, the method comprises the following steps:
s1, the unmanned aerial vehicle flies along the road to preliminarily identify the traffic sign board;
s2, accurately identifying the traffic sign board;
and S3, comparing the identification result with the construction drawing to obtain a detection result.
The intelligent traffic engineering environment detection system and method of the invention have the following beneficial effects:
(1) according to the traffic engineering environment intelligent detection system and method provided by the invention, identification accuracy and identification speed are improved by utilizing the difference in imaged brightness between the traffic sign retroreflector and the background panel;
(2) according to the traffic engineering environment intelligent detection system and method provided by the invention, the identification accuracy is improved by filtering the spectrum peak value of the flash lamp;
(3) according to the traffic engineering environment intelligent detection system and method provided by the invention, the two cameras are used for shooting, so that the identification accuracy is improved;
(4) according to the traffic engineering environment intelligent detection system and method provided by the invention, the shaking amount is used as the input condition of the model, so that the identification speed and the identification accuracy are improved;
(5) according to the traffic engineering environment intelligent detection system and method provided by the invention, the image identification process is divided into the primary judgment and accurate identification processes, so that the calculated amount of the identification process is greatly reduced, and the identification speed is improved.
Drawings
FIG. 1 shows a schematic diagram of an intelligent traffic engineering environment detection system in a preferred embodiment;
fig. 2 is a flow diagram illustrating an intelligent traffic engineering environment detection method according to a preferred embodiment.
Reference numerals
1-unmanned aerial vehicle;
2-a navigation unit;
3-a camera unit;
4-identifying a detection unit;
31-a flash lamp;
32-an illumination sensor;
41-an identification module;
42-shake detection module;
43-sign board integrity detection module;
411-coarse identification submodule;
412-precise identification submodule.
Detailed Description
The invention is explained in further detail below with reference to the drawing. The features and advantages of the present invention will become more apparent from the description.
On one hand, the invention provides an intelligent traffic engineering environment detection system for identifying and detecting traffic signboards, which comprises an unmanned aerial vehicle 1, a navigation unit 2, a camera unit 3 and an identification and detection unit 4.
According to the invention, the navigation unit 2, the camera unit 3 and the identification detection unit 4 are mounted on the unmanned aerial vehicle 1, and the unmanned aerial vehicle flight path is preset in the navigation unit 2 and is set according to the actual path of the road, so that the unmanned aerial vehicle can fly along the road.
The image pickup unit 3 is used for taking an image and transferring the image to the recognition detection unit 4.
The recognition detection unit 4 is provided with an image recognition module 41, and the image recognition module 41 analyzes and recognizes the image shot by the image shooting unit 3 to obtain a recognition result.
Further, the recognition result includes the name of the traffic signboard and the coordinate position of the traffic signboard.
In a preferred embodiment, the recognition result further includes the integrity of the traffic sign, which is represented by the similarity between the captured image and the stored image of the traffic sign.
The recognition module 41 is provided with an image recognition model, and the image recognition model is preferably a neural network model, and more preferably a CNN model, which has a high image processing speed and a high accuracy.
Traffic signs are manufactured to standard and consist of a background panel and a retroreflector. The retroreflector is a high-intensity reflective film, so under the same illumination its imaged brightness differs markedly from that of the background panel, and both brightness values change with the illumination conditions.
A conventional recognition module does not consider these characteristics of the background panel and retroreflector and simply analyses and recognizes the image. It can complete the recognition, but it is affected by changing ambient light and must process a large amount of data, so recognition is slow; under very poor ambient light it easily misidentifies signs, and at night without lighting it cannot recognize them at all.
In the present invention, the camera unit 3 is additionally provided with a flash 31 and an illumination sensor 32, and an attention mechanism is added to the image recognition model in the recognition module 41. The retroreflector brightness and the background brightness are determined from the spectral irradiance information and the camera-unit parameters and then used as parameters of the attention mechanism, which reduces the amount of data to be processed, improves recognition accuracy and speeds up image recognition.
In the present invention, the type of the illumination sensor 32 is not particularly limited; an illumination sensor with a range of 0 to 200,000 lux is preferred.
The illumination sensor 32 is arranged on top of the unmanned aerial vehicle with its detection surface facing away from the lens shooting surface of the camera unit 3, so that the illumination intensity measured by the sensor 32 is the same as that falling on the traffic sign.
In a preferred embodiment, when the illumination intensity detected by the illumination sensor 32 is higher than a preset value, the flash lamp 31 is not activated, so as to save the electric power and increase the cruising ability of the unmanned aerial vehicle 1.
The exposure angle of the flash 31 relative to the traffic sign and the shooting angle of the lens relative to the sign have a marked influence on the retroreflector brightness. When the flash 31, the lens and the traffic sign are at the same height, both angles are optimal, the brightness difference between the retroreflector and the background is pronounced, and an image with higher recognizability is obtained.
In a preferred embodiment, the height of the unmanned aerial vehicle 1 is adjusted to adjust the exposure angle and the shooting angle, so that exposure shooting can be performed better, and an obtained image can be identified more accurately and quickly.
Because the retroreflector reflects different wavelength bands differently and reflects red and near-infrared light particularly well, the flash 31 is preferably a xenon flash with a spectral peak in the range of 600-900 nm.
More preferably, a high-pass filter is arranged on the flash 31 to reduce the shorter-wavelength light it emits; a 650 nm high-pass filter is preferred, which cuts off light below 650 nm and makes the difference between the retroreflector brightness and the background brightness larger.
Surprisingly, because red and near-infrared light has little effect on a driver's vision, filtering out the shorter-wavelength light of the flash 31, which affects vision strongly, with the high-pass filter means the flash 31 barely disturbs the drivers of passing vehicles, effectively preserving road safety.
Since a visible-light lens responds poorly to light above 700 nm, which is unfavourable for imaging, in a preferred embodiment the camera unit 3 includes two cameras, a visible-light camera and a low-illumination black-and-white camera, the latter capturing images while the flash 31 fires.
More preferably, the shutter speed of the low-illumination black-and-white camera is less than 2 µs, to suppress the background reflected light as much as possible.
Carrying the camera unit 3 on an unmanned aerial vehicle differs from fixed ground-based photography: vibration is inevitable during shooting, so the captured images shake. Conventional image recognition either reduces the blur by denoising the image directly or ignores the blur and recognizes the image as-is. When such methods are applied to images captured by the unmanned aerial vehicle, recognition accuracy is seriously insufficient, recognition slows down and degrades, and in some cases the image cannot be recognized at all.
According to a preferred embodiment of the present invention, a shake detection module 42 is further arranged in the identification detection unit 4. The shake detection module 42 measures the consecutive frames captured by the camera unit 3 to obtain the amount of shake during shooting, and passes the shake amount and the distance from the unmanned aerial vehicle to the traffic sign to the recognition module 41, where they serve, together with the image, as input parameters of the image recognition model, giving a more accurate recognition result.
In a more preferred embodiment, the recognition module 41 includes a coarse recognition submodule 411 and a precise recognition submodule 412. The coarse recognition submodule 411 performs the preliminary recognition: its accuracy is low but its speed is high, and it is used only to determine whether a traffic sign is present in the image. If a sign is present, the height and position of the unmanned aerial vehicle 1 are adjusted so that it approaches the sign and matches the sign's height.
The rough recognition sub-module 411 transfers the photographed image to the precise recognition sub-module 412, and the name of the traffic sign is precisely determined by the precise recognition sub-module 412.
Preferably, the coarse recognition submodule 411 and the precise recognition submodule 412 are both CNN models, with the convolution kernel and stride of the coarse submodule 411 larger than those of the precise submodule 412, so that the coarse submodule 411 can quickly screen out suspected traffic-sign areas and the precise submodule 412 can then accurately identify the name of the traffic sign, as sketched below.
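As an illustration only, and not part of the patent disclosure, the following sketch shows one way such a coarse/precise model pair could be configured, assuming PyTorch; the layer widths, kernel sizes, strides and class count are placeholders chosen to make the contrast between the two submodules visible.

```python
# Hypothetical sketch of the coarse (411) / precise (412) recognizer pair:
# same small CNN shape, but the coarse model uses a larger kernel and stride,
# so it covers a frame with far fewer activations (faster, less precise).
import torch
import torch.nn as nn

def make_cnn(kernel_size: int, stride: int, num_classes: int) -> nn.Sequential:
    pad = kernel_size // 2
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=kernel_size, stride=stride, padding=pad),
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=kernel_size, stride=stride, padding=pad),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),   # global average pooling
        nn.Flatten(),
        nn.Linear(32, num_classes),
    )

# Coarse submodule 411: large kernel/stride, binary "is a sign present?" output.
coarse_net = make_cnn(kernel_size=7, stride=4, num_classes=2)
# Precise submodule 412: small kernel/stride, one class per sign name (count is illustrative).
precise_net = make_cnn(kernel_size=3, stride=1, num_classes=200)

frame = torch.randn(1, 3, 480, 640)              # one captured frame (illustrative size)
sign_present = coarse_net(frame).argmax(dim=1)   # fast preliminary screening
```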
In a more preferred embodiment, while the precise recognition submodule performs its recognition, the shake detection module 42 obtains the shake amount by comparing the traffic-sign areas in the consecutive frames passed on by the coarse recognition submodule 411. Specifically, the midpoint of the sign area in each frame is taken as a feature point, the feature-point coordinates are fitted, the fitted result is used to predict the feature-point coordinate in the next frame, and the actual coordinate in that frame is compared with the prediction; the resulting coordinate deviation vector is used as the shake amount (see the sketch after this paragraph).
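A minimal sketch of this shake estimate, assuming NumPy; the function and variable names are illustrative, not taken from the patent. The sign-centre track over the previous frames is fitted, extrapolated one frame ahead, and the deviation of the observed centre from that prediction is returned as the shake amount.

```python
# Fit the sign-centre coordinates over N consecutive frames, predict the next
# centre, and use (actual - predicted) as the shake (coordinate deviation) vector.
import numpy as np

def shake_amount(prev_centres: np.ndarray, actual_next: np.ndarray, degree: int = 2) -> np.ndarray:
    """prev_centres: (N, 2) sign-centre pixel coordinates from N consecutive frames.
    actual_next: (2,) observed centre in the newest frame."""
    n = len(prev_centres)
    t = np.arange(n)
    # Fit x(t) and y(t) separately, then extrapolate one frame ahead (t = n).
    px = np.polyfit(t, prev_centres[:, 0], degree)
    py = np.polyfit(t, prev_centres[:, 1], degree)
    predicted = np.array([np.polyval(px, n), np.polyval(py, n)])
    return actual_next - predicted

centres = np.array([[320, 240], [322, 241], [325, 243], [329, 246]], dtype=float)
print(shake_amount(centres, np.array([340.0, 250.0])))   # a few pixels of shake
```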
In a preferred embodiment, a sign integrity detection module 43 is further disposed in the identification detection unit 4, and the sign integrity detection module 43 is configured to detect whether the sign is complete or whether a partial occlusion phenomenon exists.
Further, the sign integrity detection module 43 is a neural network model, preferably a CNN model, and detects and identifies the integrity of the captured traffic sign by comparing the captured image with a standard image in a database.
On the other hand, the invention provides an intelligent traffic engineering environment detection method in which an unmanned aerial vehicle carrying a camera unit shoots, identifies and detects along a road to determine whether the names and positions of the road traffic signs are consistent with the road traffic sign construction drawing and, preferably, to detect whether the signs are damaged.
Specifically, the method comprises the following steps:
s1, the unmanned aerial vehicle flies along the road to preliminarily identify the traffic sign board;
s2, accurately identifying the traffic sign board;
and S3, comparing the identification result with the construction drawing to obtain a detection result.
In step S1, a navigation module is provided in the drone, so that the drone can fly along the road according to the navigation instruction.
Furthermore, the unmanned aerial vehicle shoots images along the road through the camera shooting unit, primary identification is carried out on the images through the coarse identification submodule, and whether the traffic sign board exists in the images or not is detected.
In a preferred embodiment, the navigation module further records a theoretical position of the traffic sign, and the theoretical position is obtained according to a construction drawing of the road traffic sign.
Further, when the unmanned aerial vehicle flies to the position near the theoretical position of the traffic sign board, the camera shooting unit is started, and the camera shooting unit is closed after the unmanned aerial vehicle leaves the position near the theoretical position of the traffic sign board, so that electric energy is saved, and the cruising ability of the unmanned aerial vehicle is improved.
The vicinity of the theoretical position of the traffic sign board is within a range of 10-50 meters, preferably within a range of 30 meters, with the theoretical position of the traffic sign board as a center.
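For illustration, a minimal sketch of such a proximity trigger is given below, assuming the drone position comes from GPS and the theoretical sign positions from the construction drawing; the equirectangular distance approximation, the function name and the coordinate values are assumptions, and only the 30 m default radius comes from the text above.

```python
# Switch the camera on only while the drone is within `radius_m` of a sign's
# theoretical position, to save power as described above.
import math

def within_radius(drone_latlon, sign_latlon, radius_m: float = 30.0) -> bool:
    """Approximate ground distance via an equirectangular projection (fine at tens of metres)."""
    lat1, lon1 = map(math.radians, drone_latlon)
    lat2, lon2 = map(math.radians, sign_latlon)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    return 6371000.0 * math.hypot(x, y) <= radius_m

camera_on = within_radius((39.90001, 116.40001), (39.90010, 116.40020))
```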
In the present invention, the rough identification module is not particularly limited as long as it can quickly identify an area that may be a traffic sign from an image, and is preferably a neural network model, and more preferably a CNN model.
In step S2, the area that is likely to be a traffic sign is accurately identified by the accurate identification submodule, and the name of the traffic sign and the coordinate position of the traffic sign are acquired.
In a preferred embodiment, after the coarse recognition model recognizes the traffic sign, step S2 is preceded by:
S20, the unmanned aerial vehicle adjusts its flight attitude so that the camera unit faces the traffic sign head-on; the photographed sign then has less horizontal and vertical tilt, allowing the precise recognition sub-model to complete recognition more accurately and quickly.
In the invention, "facing the traffic sign head-on" means that the angle between the line connecting the centre of the camera lens and the centre of the traffic sign and the axis perpendicular to the plane of the sign is less than 1°.
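A small sketch of this head-on criterion, assuming the lens centre, the sign centre and the sign-plane normal are all known in one coordinate frame (the patent does not say how they are obtained); names and values are illustrative.

```python
# True when the angle between the lens-to-sign line and the sign's normal axis
# is at most max_deg (1 degree per the definition above).
import numpy as np

def is_head_on(lens_centre, sign_centre, sign_normal, max_deg: float = 1.0) -> bool:
    view = np.asarray(sign_centre, dtype=float) - np.asarray(lens_centre, dtype=float)
    n = np.asarray(sign_normal, dtype=float)
    cos_a = abs(view @ n) / (np.linalg.norm(view) * np.linalg.norm(n))
    angle_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return angle_deg <= max_deg

print(is_head_on([0, 0, 1.5], [12.0, 0.1, 1.5], [-1.0, 0.0, 0.0]))   # True (about 0.5 degrees)
```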
In a preferred embodiment, in step S2, the unmanned aerial vehicle is in a hovering state during the shooting of the image by the camera unit, so as to ensure the sharpness of the image.
Conventional traffic-sign recognition only analyses the image, treating the sign as an ordinary scene object; it neither considers the characteristics of the sign's background panel and retroreflector nor exploits them to reduce the analysis difficulty and improve accuracy.
Surprisingly, as shown in Table 1, the reflection coefficient of the retroreflector differs significantly at different exposure and shooting angles. When the camera unit faces the sign head-on, the flash, the lens and the sign are at approximately the same height, both angles are small, the brightness difference between the retroreflector and the background is significant, and an image with higher recognizability is obtained.
Table 1: reflection coefficients of the retroreflector at different exposure and shooting angles (reproduced only as an image in the original publication; not shown here)
In conventional image recognition of traffic signs, the picture is converted to grayscale before recognition; the grayscale processing reduces the picture to a single channel, and although iteration over the picture becomes faster, interference information is introduced and the recognition rate is low.
The brightness of the sign's retroreflector differs greatly from the background brightness. The difficulty addressed by the invention is how to apply this difference to sign recognition so that the image recognition model quickly attends to the traffic sign in the image while actively ignoring the rest of the background.
Through keen research, the inventors set an attention mechanism in the image recognition model and used the retroreflector brightness and the background brightness as its parameters, so that the traffic sign is recognized accurately.
The attention mechanism is a data-processing method in machine learning that is widely used in tasks such as natural language processing, image recognition and speech recognition. It re-weights where the model attends according to the task at hand, so that the model focuses on part of the content, and it can be attached to a variety of neural network models.
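As a hedged illustration of the idea rather than the patent's actual attention implementation, the sketch below weights every pixel by how close its luminance is to the expected retroreflector brightness versus the expected background brightness, and applies that weight map before recognition; the expected brightness values would come from the formulas discussed below, but here they are plain inputs, and all names are assumptions.

```python
# Brightness-parameterised attention sketch: pixels near the retroreflector
# brightness get weight ~1, pixels near the background brightness get weight ~0.
import numpy as np

def brightness_attention(gray: np.ndarray, retro_brightness: float, background_brightness: float) -> np.ndarray:
    """gray: HxW luminance image. Returns an HxW weight map in [0, 1]."""
    scale = max(abs(retro_brightness - background_brightness), 1e-6)
    w = (gray - background_brightness) / scale
    return np.clip(w, 0.0, 1.0)

gray = np.random.rand(480, 640) * 255.0
attended = gray * brightness_attention(gray, retro_brightness=220.0, background_brightness=60.0)
```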
The inventors found that the retroreflector brightness and the background brightness also change with the lighting environment, and that the spectral irradiance information and the sunlight irradiation angle have a large influence on both.
Specifically, the inventors propose that the retroreflector brightness can be expressed as a function of the parameters listed below:
[retroreflector brightness formula - reproduced only as an image in the original publication; not shown here]
and the background brightness likewise:
[background brightness formula - reproduced only as an image in the original publication; not shown here]
where A_d is the detector area of a lens CCD pixel; t is the exposure time of a single frame; r(γ) is the spectral response function of the lens; G is the amplification gain coefficient of the lens; F is the reciprocal of the relative aperture of the lens; τ_0 is the transmittance of the lens optical system; N represents the lens dark-current noise, readout noise, quantization noise and photon noise; these are intrinsic parameters of the lens and are obtained before the equipment is installed;
P_γ = 0.65 is the diffuse reflectance of a polished aluminium plate; R_2A is a regression coefficient representing the reflection coefficient of the traffic-sign reflective film;
D is the distance between the lens and the traffic sign, obtained from the unmanned aerial vehicle's infrared range measurement;
E_flash is the spectral irradiance of the flash; E_sun is the spectral irradiance of sunlight, obtained from the illumination sensor;
β_2 is the irradiation angle of sunlight relative to the direction of travel; the sunlight irradiation angles for different regions at different dates and times are pre-recorded in the model and are looked up from the date, the time and the heading given by the unmanned aerial vehicle's gyroscope.
The inventors found that when the spectral irradiance is above a certain value, supplementary light from the flash is not needed and the recognition result of the precise recognition submodule is more accurate.
In a preferred embodiment, when the spectral irradiance is above 60,000 to 90,000 lux, preferably above 80,000 lux, the flash does not need to be fired, which saves electric energy and increases the endurance of the unmanned aerial vehicle.
In a preferred embodiment, when the flash is fired, the low-illumination black-and-white camera is used for shooting, yielding an image with a simpler background and improving the accuracy of traffic-sign recognition; when the flash is not fired, the visible-light camera is used, as in the sketch below.
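A minimal sketch of this capture decision, using the preferred 80,000 lux threshold from the text above; the function name and the returned field names are assumptions.

```python
# Below the ambient threshold: fire the flash and shoot with the low-light
# black-and-white camera. Above it: no flash, shoot with the visible-light camera.
AMBIENT_THRESHOLD_LUX = 80_000  # preferred value stated in the description

def choose_capture_mode(ambient_lux: float) -> dict:
    use_flash = ambient_lux <= AMBIENT_THRESHOLD_LUX
    return {
        "flash": use_flash,
        "camera": "low_light_mono" if use_flash else "visible",
    }

print(choose_capture_mode(25_000))   # {'flash': True, 'camera': 'low_light_mono'}
print(choose_capture_mode(95_000))   # {'flash': False, 'camera': 'visible'}
```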
The accurate identification submodule is a neural network model, preferably a CNN model, and the CNN model has the advantages of high image processing speed and high accuracy and is widely applied to image processing analysis.
Further, the precise recognition submodule takes the image shot by the lens as the input of the image recognition model and the name of the traffic sign as the output.
In a preferred embodiment, before the accurate identification submodule identifies, the area range of the traffic sign board in the image is preliminarily judged through the rough identification submodule so as to accelerate the identification speed.
By splitting the image recognition process into a preliminary range determination and a precise recognition step, the amount of computation in the recognition process is greatly reduced and the recognition speed is improved.
The inventors found that although hovering while shooting greatly improves stability, the unmanned aerial vehicle still vibrates, so the images captured by the lens are blurred, and the blur lowers the recognition rate of the precise recognition sub-model.
The usual way to handle a blurred image is to denoise it with an algorithm. Because that approach treats the problem purely as an image-restoration task, both the restoration speed and the post-restoration accuracy are poor, and it performs badly when applied to the precise recognition sub-model.
After intensive study, the inventors instead feed the shake amount at the time of shooting, the distance from the unmanned aerial vehicle to the traffic sign and the captured image together into the image recognition model; preferably, the shake amount, the distance and the image of the sign area determined by the coarse recognition sub-model form the inputs of the precise recognition sub-model.
Specifically, a shake detection model is also provided. When the coarse recognition sub-model recognizes a traffic-sign area, the images are passed to both the shake detection model and the precise recognition sub-model; because the lens shoots continuously, the images passed on by the coarse recognition sub-model are consecutive frames.
The shake detection model takes the midpoint of the sign area in each received frame as a feature point, fits the feature-point coordinates over the consecutive frames to obtain a fitted curve, and uses that curve to predict the feature-point coordinate in the next frame; this predicted value is called the predicted coordinate.
At the next moment, when the shake detection model receives the next frame from the coarse recognition sub-model, it compares the feature-point coordinate of that frame with the predicted coordinate to obtain a coordinate deviation vector, which is taken as the shake amount.
In a preferred embodiment, after the traffic sign is precisely identified, the traffic sign integrity detection module checks whether the traffic sign is intact.
Specifically, the consecutive frames shot by the camera during precise recognition are fused into one image; the fused image is the input of the sign integrity detection module, and the similarity between that image and the standard image of the traffic sign is the output.
Further, the image fusion is obtained by a Gaussian operation on the pixels at the same position across the frames, as sketched below.
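One plausible reading of this per-pixel Gaussian fusion, offered as an assumption rather than the patent's exact operation: average the co-located pixels of the registered frames with Gaussian weights centred on the middle frame. The fused image would then be scored against the standard sign image and compared with the similarity threshold mentioned next.

```python
# Gaussian-weighted average across a stack of co-registered grayscale frames.
import numpy as np

def gaussian_fuse(frames: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """frames: (N, H, W) stack of aligned frames; returns the fused (H, W) image."""
    n = frames.shape[0]
    t = np.arange(n) - (n - 1) / 2.0          # frame offsets from the middle frame
    w = np.exp(-0.5 * (t / sigma) ** 2)
    w /= w.sum()
    return np.tensordot(w, frames.astype(float), axes=(0, 0))

stack = np.random.rand(5, 480, 640)
fused = gaussian_fuse(stack)
```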
When the output similarity is lower than a threshold, for example below 80%, the traffic sign is considered damaged.
In step S3, after the traffic sign has been accurately identified, the identification result, including the position and the name of the traffic sign, is compared with the construction drawing to obtain a detection result;
the detection result comprises the deviation distance between the traffic sign and the construction drawing and whether the name of the traffic sign is correct or not.
The position of each traffic sign is calculated from the unmanned aerial vehicle's GPS, its gyroscope and its infrared range measurement, as sketched below.
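A hedged sketch of how such a position could be derived from the drone's GPS fix, gyroscope heading and infrared range, under the assumption that the range is measured along the heading toward the head-on sign; the flat-earth offset and all names are illustrative, not the patent's method.

```python
# Offset the drone position by the measured range along the gyroscope heading.
import math

def sign_position(drone_lat: float, drone_lon: float, heading_deg: float, range_m: float):
    """heading_deg: clockwise from north. Returns the estimated (lat, lon) of the sign."""
    d_north = range_m * math.cos(math.radians(heading_deg))
    d_east = range_m * math.sin(math.radians(heading_deg))
    dlat = d_north / 111_320.0                                    # metres per degree of latitude
    dlon = d_east / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon

lat, lon = sign_position(39.9000, 116.4000, heading_deg=95.0, range_m=12.5)
```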
In a preferred embodiment, the detection result further includes whether the traffic sign is damaged.
Examples
Example 1
Intelligent traffic engineering environment detection was carried out with an unmanned aerial vehicle, and a traffic-sign recognition test was run on a newly built road section with 200 traffic signs. The flash 31 was a xenon flash with a spectral peak of 800 nm and an emission power of 500 W, fitted with a 650 nm high-pass filter; the unmanned aerial vehicle carried a visible-light camera and a low-illumination black-and-white camera; the image recognition model comprised a coarse recognition sub-model, a precise recognition sub-model and a shake detection model, the inputs of the precise recognition sub-model being the shake amount at the time of shooting, the distance from the unmanned aerial vehicle to the traffic sign, and the image of the suspected sign area identified by the coarse recognition sub-model; an attention mechanism was provided in the precise recognition sub-model, with the retroreflector brightness and the background brightness as its parameters;
in the process of identification and detection, the method comprises the following steps:
1) The unmanned aerial vehicle flies along the road; when it reaches the theoretical position of a traffic sign, it performs the preliminary identification of the sign.
2) After the preliminary identification, the unmanned aerial vehicle adjusts its flight attitude so that the camera unit faces the traffic sign head-on.
3) The unmanned aerial vehicle hovers and the spectral irradiance is measured. When the spectral irradiance is at or below 80,000 lux, the flash and the black-and-white camera are used to capture the image; when it is above 80,000 lux, the flash is not fired and the visible-light camera captures the image.
The captured image is coarsely recognized with the coarse recognition model to determine the preliminary area of the traffic sign, and that area is passed to the precise recognition module for accurate identification.
During precise identification, the retroreflector brightness and the background brightness are used as the parameters of the attention mechanism, and the shake amount at the time of shooting, the distance from the unmanned aerial vehicle to the traffic sign and the captured image together form the input of the precise recognition module.
Example 2
The setup was the same as in Example 1, except that the flash was a xenon flash with a spectral peak of 400 nm and an emission power of 500 W.
Example 3
The setup was the same as in Example 1, except that the low-illumination black-and-white camera and the 800 nm xenon flash were not provided; only an ordinary LED flash was provided and all images were taken with the visible-light camera.
Example 4
The same arrangement as in Example 1, except that during precise identification the unmanned aerial vehicle neither adjusts its attitude nor hovers, and the lens does not face the traffic sign head-on.
Example 5
The same arrangement as in Example 1, except that no shake detection model is provided.
Example 6
The same arrangement as in Example 1, except that no attention mechanism is provided.
The recognition rates of Examples 1 to 6 were counted; the results are shown in Table 2.
Table 2
Item         Recognition rate
Example 1    100%
Example 2    99.5%
Example 3    95.5%
Example 4    94%
Example 5    96.5%
Example 6    91%
In the description of the present invention, it should be noted that the terms "upper", "lower", "inner" and "outer" indicate the orientation or positional relationship based on the operation state of the present invention, and are only for convenience of description and simplification of description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation and be operated, and thus should not be construed as limiting the present invention.
The present invention has been described above in connection with preferred embodiments, but these embodiments are merely exemplary and merely illustrative. On the basis of the above, the invention can be subjected to various substitutions and modifications, and the substitutions and the modifications are all within the protection scope of the invention.

Claims (10)

1. An intelligent traffic engineering environment detection system, comprising an unmanned aerial vehicle (1), a navigation unit (2), a camera unit (3) and an identification detection unit (4).
2. The traffic engineering environment intelligent detection system according to claim 1,
the navigation unit (2), the camera unit (3) and the identification detection unit (4) are mounted on the unmanned aerial vehicle (1), an unmanned aerial vehicle flight path is preset in the navigation unit (2), and the path is set according to an actual path of a road, so that the unmanned aerial vehicle can fly along the road;
the camera shooting unit (3) is used for shooting images and transmitting the images to the recognition detection unit (4);
an image recognition module (41) is arranged in the recognition detection unit (4), and the image shot by the camera unit (3) is analyzed and recognized through the image recognition module (41) to obtain a recognition result.
3. The traffic engineering environment intelligent detection system according to claim 1,
the camera unit (3) is also provided with a flash lamp (31) and an illumination sensor (32).
4. The traffic engineering environment intelligent detection system according to claim 3,
the unmanned aerial vehicle top is arranged in illumination sensor (32), and the detection face of illumination sensor (32) is opposite with the camera lens shooting face direction of camera unit (3).
5. The traffic engineering environment intelligent detection system according to claim 4,
the flash lamp (31) is a xenon flash lamp with a spectrum peak value within the range of 600-900 nm.
6. The traffic engineering environment intelligent detection system according to claim 4,
the flash lamp (31) is provided with a high-pass filter.
7. The traffic engineering environment intelligent detection system according to claim 4,
the camera unit (3) comprises two cameras, namely a visible light camera and a low-illumination black and white camera.
8. The traffic engineering environment intelligent detection system according to claim 2,
an image recognition model is arranged in the recognition module (41), and the image recognition model is preferably a neural network model, and more preferably a CNN model.
9. An intelligent traffic engineering environment detection method, which adopts the intelligent traffic engineering environment detection system as claimed in any one of claims 1 to 8, and in which an unmanned aerial vehicle carrying a camera unit shoots, identifies and detects along a road to determine whether the name and the position of a road traffic sign are consistent with the road traffic sign construction drawing.
10. The traffic engineering environment intelligent detection method according to claim 9,
the method comprises the following steps:
s1, the unmanned aerial vehicle flies along the road to preliminarily identify the traffic sign board;
s2, accurately identifying the traffic sign board;
and S3, comparing the identification result with the construction drawing to obtain a detection result.
CN202010905521.7A 2020-09-01 2020-09-01 Traffic engineering environment intelligent detection system Active CN112071078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010905521.7A 2020-09-01 2020-09-01 Traffic engineering environment intelligent detection system (published as CN112071078B)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010905521.7A 2020-09-01 2020-09-01 Traffic engineering environment intelligent detection system (published as CN112071078B)

Publications (2)

Publication Number Publication Date
CN112071078A 2020-12-11
CN112071078B 2022-11-08

Family

ID=73665952

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010905521.7A Active CN112071078B (en) 2020-09-01 2020-09-01 Traffic engineering environment intelligent detection system

Country Status (1)

Country Link
CN (1) CN112071078B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180174448A1 (en) * 2016-12-21 2018-06-21 Intel Corporation Unmanned aerial vehicle traffic signals and related methods
CN108012098A (en) * 2017-11-26 2018-05-08 合肥赛为智能有限公司 A kind of unmanned plane traffic inspection method
CN110659550A (en) * 2018-06-29 2020-01-07 比亚迪股份有限公司 Traffic sign recognition method, traffic sign recognition device, computer equipment and storage medium
CN109782364A (en) * 2018-12-26 2019-05-21 中设设计集团股份有限公司 Traffic mark board based on machine vision lacks detection method
CN109754485A (en) * 2018-12-27 2019-05-14 高戎戎 A kind of road furniture cruising inspection system based on radio frequency identification
CN110501018A (en) * 2019-08-13 2019-11-26 广东星舆科技有限公司 A kind of traffic mark board information collecting method for serving high-precision map producing

Also Published As

Publication number Publication date
CN112071078B (en) 2022-11-08


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
TR01: Transfer of patent right
Effective date of registration: 2023-05-29
Address after: 101316 yard 5, jinmayuan 1st Street, Shunyi District, Beijing
Patentee after: CATS TESTING TECHNOLOGY (BEIJING) CO.,LTD.
Patentee after: JIAOKEYUAN SCIENCE AND TECHNOLOGY GROUP CO.,LTD.
Address before: 6 / F, building 2, No. 240, Huixinli, Chaoyang District, Beijing 100029
Patentee before: CATS TESTING TECHNOLOGY (BEIJING) CO.,LTD.