CN113077494A - Road surface obstacle intelligent recognition equipment based on vehicle orbit - Google Patents
- Publication number
- CN113077494A (application number CN202110387405.5A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- image
- tracking
- track
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/254—Image analysis; analysis of motion involving subtraction of images
- G06T7/277—Image analysis; analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06V10/267—Image preprocessing; segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/56—Extraction of image or video features relating to colour
- G06V20/588—Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road
- G06T2207/30241—Indexing scheme for image analysis; subject of image: trajectory
Abstract
The invention discloses intelligent road-surface obstacle recognition equipment based on vehicle tracks. Images acquired by a camera are processed with target detection and target tracking methods: vehicles are tracked to obtain their motion tracks, and from those tracks the equipment judges whether an obstacle, a traffic accident, or maintenance construction exists at a given position on the road. The equipment comprises an image acquisition component, a vehicle target detection component, a vehicle target tracking and track extraction component, and an obstacle early-warning component. It has low cost, strong real-time performance, and high technological content; it handles difficulties such as inconspicuous obstacle features and long detection distances in the detection process; and it raises the intelligence level of road obstacle recognition and management while markedly improving road safety performance, service level, and emergency-handling capability.
Description
Technical Field
The invention relates to foreign-matter detection and identification within the technical field of image recognition, and in particular to intelligent road-surface obstacle recognition equipment based on vehicle tracks.
Background
Road-network construction in China is increasingly modern, road planning is more reasonable, and management technology is more complete. A retrieved patent, application publication number CN107909012A, describes a real-time vehicle tracking detection method and device based on a disparity map, comprising the following steps: performing image processing on the acquired road-surface image to obtain a suspected vehicle area; performing vehicle detection on the suspected vehicle area through a preset detection model to obtain the initial position and distance information of each vehicle in that area; evaluating each vehicle's initial position and distance information through a stability-evaluation algorithm to obtain an evaluation result; and judging whether the evaluation result meets a preset standard, and if it does, starting a tracking algorithm to perform tracking detection on each corresponding vehicle.
It has the following disadvantage: the current management of road traffic in China mainly combines management posts assigned by road section with auxiliary video monitoring, and the surveillance video is used chiefly for cause inquiry and responsibility pursuit after a traffic accident has happened, so the adverse effects caused by obstacles cannot be fundamentally resolved.
Disclosure of Invention
The invention aims to provide road obstacle intelligent identification equipment based on vehicle tracks so as to overcome the defects in the prior art.
In order to achieve the purpose, the invention adopts the technical scheme that:
the intelligent recognition equipment comprises an image acquisition component, a vehicle target detection component, a vehicle target tracking and track extraction component and an obstacle early warning component, wherein the road surface refers to a road surface and an urban road surface which use an automobile as a service object, and comprises an asphalt concrete road surface, a cement concrete road surface and a masonry road surface. The obstacle refers to an article which poses a threat to the driving of the automobile and comprises the following items: obstacles, potholes, traffic accidents, and maintenance and construction. The intelligent identification is carried out by adopting a target detection and target tracking method on the image acquired by the camera. The intelligent identification is to track the vehicle to obtain the motion trail of the vehicle and judge whether an obstacle, a traffic accident and maintenance and construction conditions exist at a certain position of the road according to the motion trail.
Preferably, the image acquisition component consists of a camera and an upright column, and the acquired image information comprises video and snapshot pictures. To guarantee algorithm and recognition accuracy, the farthest recognition distance of the camera is limited to 200 meters, and the recognition edge of the camera is kept parallel to the vehicle travel direction. The column carrying the camera uses an anti-vibration design: rigidity is improved in four respects (foundation, material, cross-section, and structure), and the mass of the structure and the spatial distribution of damping are adjusted.
Preferably, the vehicle target detection component includes background extraction and vehicle extraction. Background extraction proceeds as follows: in an ideal scene, a moving vehicle can be regarded as noise superimposed on the background image. Because vehicles are diverse, the brightness of the road surface and of the vehicles in a scene differs: some vehicles are brighter than the road surface, some are darker, and some are roughly equal to it. This noise is eliminated by accumulating a number of images and then averaging them, so the scene background is obtained by accumulating and averaging consecutive video frames:

Background(x, y) = (1/N) · Σ_{i=1}^{N} Ii(x, y)

where Background(x, y) is the background image in the video, Ii(x, y) is the i-th frame image, and N is the number of video frames used for background extraction.
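The accumulate-and-average step above can be sketched in a few lines of Python; image frames are represented as nested lists of gray values, and the frame data here is illustrative rather than taken from the patent:

```python
# Sketch of the background-extraction step: the scene background is the
# per-pixel average of N consecutive frames, so a moving vehicle (which
# occupies any given pixel only briefly) is averaged away as noise.

def extract_background(frames):
    """Background(x, y) = (1/N) * sum_i I_i(x, y)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    background = [[0.0] * w for _ in range(h)]
    for frame in frames:
        for y in range(h):
            for x in range(w):
                background[y][x] += frame[y][x] / n
    return background

# A static pixel keeps its true value; a pixel crossed by a vehicle in
# one of four frames is pulled only slightly away from the background.
frames = [
    [[100, 100], [100, 100]],
    [[100, 100], [100, 100]],
    [[100, 100], [100, 200]],   # vehicle passes over pixel (1, 1)
    [[100, 100], [100, 100]],
]
bg = extract_background(frames)
print(bg[0][0])   # 100.0 (static pixel)
print(bg[1][1])   # 125.0 (the transient vehicle only shifts the mean)
```

With more frames N, the influence of each transient vehicle on the mean shrinks further, which is why the patent accumulates many frames.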
The vehicle extraction adopts a color background difference method to extract moving vehicles, and comprises the following steps:
(1) color background difference:
The i-th frame color image fi is differenced from the color background image BGi in each of the three channels R, G, B, which extracts the foreground moving object fgi:

fgi(x, y) = | fi(x, y) − BGi(x, y) |  (computed separately in the R, G, and B channels)

After the color background difference the image still consists of the three RGB primaries; to simplify computation it is converted to grayscale, and the vehicle moving-object foreground is finally extracted by threshold segmentation.
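A minimal sketch of the color background difference followed by graying and threshold segmentation; pixels are (R, G, B) tuples, and the threshold value of 30 is an assumed, illustrative choice:

```python
# Sketch of the color background difference: per-channel absolute
# difference |f_i - BG_i|, then grayscale by channel averaging, then
# threshold segmentation to obtain a binary vehicle-foreground mask.

def foreground_mask(frame, background, threshold=30):
    h, w = len(frame), len(frame[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # per-channel difference, then gray value by averaging
            diff = [abs(frame[y][x][c] - background[y][x][c]) for c in range(3)]
            gray = sum(diff) / 3.0
            mask[y][x] = 1 if gray > threshold else 0
    return mask

background = [[(120, 120, 120), (120, 120, 120)]]
frame      = [[(125, 118, 122), (30, 40, 50)]]   # right pixel: dark vehicle
print(foreground_mask(frame, background))   # [[0, 1]]
```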
(2) Morphological treatment:
The foreground object obtained by the color background difference method needs further processing before a relatively complete moving-vehicle object can be extracted. The invention applies morphological operations and blob filling to the binary image to eliminate noise, holes, and similar defects. The most basic morphological operations are erosion and dilation; an opening is erosion followed by dilation, and a closing is dilation followed by erosion.
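The erosion, dilation, and opening operations described above can be sketched on a small binary image; the 3x3 structuring element and the toy mask are illustrative assumptions:

```python
# Sketch of the morphological step: binary erosion and dilation with a
# 3x3 structuring element. Opening (erode then dilate) removes isolated
# noise pixels; closing (dilate then erode) would fill small holes.

def _neighborhood(img, y, x):
    h, w = len(img), len(img[0])
    return [img[j][i] for j in range(y - 1, y + 2) for i in range(x - 1, x + 2)
            if 0 <= j < h and 0 <= i < w]

def erode(img):
    return [[1 if all(v == 1 for v in _neighborhood(img, y, x)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def dilate(img):
    return [[1 if any(v == 1 for v in _neighborhood(img, y, x)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def opening(img):          # erosion before dilation
    return dilate(erode(img))

noisy = [
    [1, 0, 0, 0, 0],       # isolated noise pixel at (0, 0)
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],       # solid 3x3 "vehicle" blob
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
opened = opening(noisy)
print(opened[0][0])          # 0 — the noise pixel is removed
print(sum(map(sum, opened))) # 9 — the solid 3x3 blob survives
```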
Vehicle shadow removal:
In the daytime, under strong sunlight, a vehicle casts a shadow on the road surface. If no appropriate measure eliminates it, the shadow extends into adjacent lanes and causes false detections, so the foreground image undergoes shadow elimination to improve detection accuracy. Moving-vehicle detection is usually performed outdoors, where sunlight and ground-reflected light cast vehicle shadows. A shadow is a region whose brightness is lower than the background's, and shadow-detection algorithms are studied from the mechanism of shadow formation. In a traffic-video detection scene, the brightness model of pixel (x, y) at time i is:
Si(x, y) = Ei(x, y) · Ri(x, y)
where Si(x, y) is the brightness of pixel (x, y) at time i, Ri(x, y) is the reflection coefficient, and Ei(x, y) is the illumination intensity received by a unit of the object surface. Ri(x, y) is typically small and can be regarded as constant. Ei(x, y) is modeled as:

Ei(x, y) = CA + k(x, y) · CP · cos⟨N(x, y), L⟩

where CA and CP are the ambient-light and light-source intensities respectively, N(x, y) is the normal vector of the object surface, L is the vector from the object surface toward the light source, and k(x, y) (0 ≤ k(x, y) ≤ 1) is the coefficient of light-energy loss relative to the unshadowed case: k(x, y) = 1 where there is no shadow, 0 < k(x, y) < 1 in the penumbra where sunlight is only partly blocked by the object, and k(x, y) = 0 in the umbra where sunlight is completely blocked.
Although a shadow has the same motion properties as the moving vehicle, its texture, brightness, edge contour, and similar information differ greatly. The invention adopts a moving-object shadow-elimination method in the HSV color space. The HSV color space describes a color by its hue, saturation, and brightness, which is closer to human color perception, so it reflects the color and gray-scale information of the moving target and its shadow more accurately.
The steps of the shadow elimination algorithm based on the HSV space are as follows:
The HSV values are obtained by converting from the RGB space. With R, G, B normalized to [0, 1], max = max(R, G, B) and min = min(R, G, B):

V = max
S = (max − min) / max  (S = 0 when max = 0)
H = 60° × (G − B) / (max − min)        if max = R
H = 60° × (2 + (B − R) / (max − min))  if max = G
H = 60° × (4 + (R − G) / (max − min))  if max = B
(adding 360° when H is negative)

where H denotes hue, S saturation, V brightness, and R, G, B the red, green, and blue components. In the HSV color space the hue H and saturation S of a shadowed area are essentially unchanged relative to the unshadowed foreground area; the greatest difference is that the shadowed area is much darker in brightness V. The shadow mask is:

S(x, y) = 1  if α ≤ V_F(x, y) / V_B(x, y) ≤ β, |S_F(x, y) − S_B(x, y)| ≤ τS, and |H_F(x, y) − H_B(x, y)| ≤ τH;  otherwise S(x, y) = 0

where S(x, y) is the shadow mask of the foreground object image at coordinates (x, y), and S(x, y) = 1 marks a shadow; τS and τH are the thresholds on the saturation and hue components; H_F, S_F, V_F are the three-channel component values at (x, y) after the K-th frame color image is converted to HSV space; and H_B, S_B, V_B are the three-channel component values at (x, y) after the K-th frame dynamic color background image is converted to HSV space. By this HSV detection principle the greatest image difference between shadow and vehicle is brightness, and shadow elimination is finally achieved; this greatly helps vehicle-detection accuracy by removing shadow interference.
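A hedged sketch of the HSV shadow test: Python's standard colorsys module performs the RGB-to-HSV conversion, and a pixel is labeled shadow when its brightness drops markedly relative to the background while hue and saturation stay close. All threshold values (alpha, beta, tau_s, tau_h) are assumed for illustration, not taken from the patent:

```python
# Sketch of the HSV shadow mask: shadow pixels keep the background's hue
# and saturation but have much lower brightness V. colorsys returns
# H, S, V each in [0, 1]; thresholds here are illustrative assumptions.
import colorsys

def is_shadow(fg_rgb, bg_rgb, alpha=0.4, beta=0.9, tau_s=0.15, tau_h=0.1):
    hf, sf, vf = colorsys.rgb_to_hsv(*[c / 255.0 for c in fg_rgb])
    hb, sb, vb = colorsys.rgb_to_hsv(*[c / 255.0 for c in bg_rgb])
    if vb == 0:
        return False
    ratio = vf / vb                      # brightness drop ratio
    return (alpha <= ratio <= beta       # darker, but not totally black
            and abs(sf - sb) <= tau_s    # saturation nearly unchanged
            and abs(hf - hb) <= tau_h)   # hue nearly unchanged

road   = (150, 150, 150)
shadow = (90, 90, 90)      # same hue/saturation, lower brightness
car    = (30, 60, 200)     # different hue and brightness: real foreground
print(is_shadow(shadow, road))   # True
print(is_shadow(car, road))      # False
```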
Preferably, the vehicle target tracking and track extraction component finds and extracts vehicle targets in the video sequence in real time, tracks them continuously within the detection area, and calculates and draws their motion tracks from vehicle features, providing the data basis for subsequent motion analysis. Moving-vehicle tracking uses a Kalman-filter tracking algorithm: a Kalman filter model is designed around multi-feature matching, with the vehicle center coordinates and vehicle speed as the matching features from which the vehicle motion track is extracted. The algorithm predicts the motion information of the vehicle target's next state, which narrows the search range for the moving vehicle and improves both the real-time performance and the reliability of the search. Tracking with the Kalman filter proceeds as follows: first, the target features are selected, namely the vehicle center coordinates, and the vehicle target's motion speed is calculated by tracking those coordinates; next, feature-similarity matching is performed to locate similar target vehicles; finally, the linear dynamic model is established and the Kalman filter parameters are selected. The specific steps are as follows:
(1) tracking vehicle centroid extraction
In image processing, the vehicle is framed by a circumscribed rectangle, and the center of the rectangle is taken as the vehicle centroid. From the rectangle the diagonal corner coordinates are obtained: lower-left corner A(x0, y0) and upper-right corner B(x1, y1). The centroid follows from the corner coordinates:

(xc, yc) = ((x0 + x1) / 2, (y0 + y1) / 2)

and the motion speed of the centroid is obtained from its displacement between frames:

vx = (xc,k − xc,k−1) / Δt,  vy = (yc,k − yc,k−1) / Δt

where vx is the vehicle's motion speed along the x-axis and vy its motion speed along the y-axis, from which the speed of the moving vehicle is estimated; k denotes the k-th frame image and i denotes the i-th target coordinate in the k-th frame image.
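The centroid and speed computation above amounts to a few arithmetic steps; the bounding-box coordinates and frame interval below are illustrative:

```python
# Sketch of centroid extraction: the circumscribed rectangle is given by
# its lower-left corner A(x0, y0) and upper-right corner B(x1, y1); the
# centroid is the rectangle center, and the centroid displacement between
# consecutive frames gives the motion speed (vx, vy).

def centroid(x0, y0, x1, y1):
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def velocity(prev_c, curr_c, dt):
    return ((curr_c[0] - prev_c[0]) / dt, (curr_c[1] - prev_c[1]) / dt)

c_k  = centroid(100, 50, 140, 80)     # bounding box in frame k
c_k1 = centroid(106, 50, 146, 80)     # frame k+1: box moved 6 px right
vx, vy = velocity(c_k, c_k1, dt=1.0)  # dt = inter-frame interval
print(c_k)        # (120.0, 65.0)
print((vx, vy))   # (6.0, 0.0)
```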
(2) Centroid matching method
Because the time difference between two consecutive frames is short, the vehicle centroid cannot undergo a large position shift, so the position of the same vehicle changes little between consecutive frames. Based on this analysis, a vehicle matching rule is proposed: the centroid distance of the same vehicle between two consecutive frames is small. A threshold is set, and the distance between the two centroids is compared with it to judge whether two detections are the same vehicle. Suppose the i-th target in frame k is to be tracked; the distances from all vehicle targets in frame k+1 to the i-th target are calculated. For the j-th target in frame k+1:

D(i, j) = sqrt( (xk,i − xk+1,j)² + (yk,i − yk+1,j)² )

where (xk,i, yk,i) is the centroid coordinate of the i-th vehicle target in frame k, (xk+1,j, yk+1,j) is the centroid coordinate of the j-th vehicle target in frame k+1, D(i, j) is the distance between the two adjacent vehicle targets, and TH is the set distance threshold. If D(i, j) ≤ TH, the match succeeds and the two detections are the same vehicle; if D(i, j) > TH, the match fails and they are not the same vehicle.
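The matching rule can be sketched as a nearest-centroid search under the distance threshold TH; the coordinates and the TH value of 20 pixels are assumptions for illustration:

```python
# Sketch of the centroid matching rule: the i-th target in frame k is
# matched to the nearest target in frame k+1, provided the centroid
# distance D(i, j) does not exceed the threshold TH.
import math

def match_target(target, candidates, th=20.0):
    """Return the index of the matching candidate in frame k+1, or None."""
    best_j, best_d = None, th
    for j, cand in enumerate(candidates):
        d = math.dist(target, cand)     # D(i, j)
        if d <= best_d:
            best_j, best_d = j, d
    return best_j

frame_k_target = (120.0, 65.0)
frame_k1 = [(300.0, 65.0), (126.0, 65.0), (40.0, 200.0)]
print(match_target(frame_k_target, frame_k1))         # 1 (6 px away, within TH)
print(match_target(frame_k_target, [(300.0, 65.0)]))  # None (no target within TH)
```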
(3) Selection of Kalman filter parameters
According to the actual video image, the vehicle target is assumed to move uniformly between two adjacent frames (the time difference between consecutive frames is small), so the dynamic model of the Kalman filter is linear and is set up as follows:
the motion state vector is:
xk=[xk,yk,vx,k,vy,k]T
where xk and yk are the vehicle centroid coordinates and vx,k and vy,k are the motion speeds of the centroid coordinates;

The transfer matrix Ak is:

Ak = [ 1  0  Δt 0
       0  1  0  Δt
       0  0  1  0
       0  0  0  1 ]

where Δt is the time interval between two frames.

The observation matrix Ck, which extracts the observed centroid position from the state, is:

Ck = [ 1  0  0  0
       0  1  0  0 ]

The covariance matrices of the white Gaussian noise terms Wk−1 and Vk are

Qk−1 = E[Wk−1 Wk−1ᵀ],  Rk = E[Vk Vkᵀ]

and the estimation-error covariance is

Pk = E[(xk − x̂k)(xk − x̂k)ᵀ]
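A simplified sketch of the tracker built from these matrices: the predict step applies the transfer matrix Ak to the state [x, y, vx, vy], and, as a deliberate simplification, the correct step blends prediction and measurement with a fixed scalar gain instead of the full covariance recursion of the patent's filter:

```python
# Sketch of the constant-velocity Kalman tracker: state [x, y, vx, vy],
# transfer matrix A with time step dt, observation picking out the
# centroid position. Plain-list matrix helpers keep it self-contained;
# the fixed gain replaces the full covariance update (simplification).

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def kalman_predict(state, dt):
    """x_k = A x_{k-1} with A = [[1,0,dt,0],[0,1,0,dt],[0,0,1,0],[0,0,0,1]]."""
    a = [[1, 0, dt, 0],
         [0, 1, 0, dt],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
    return [row[0] for row in mat_mul(a, [[s] for s in state])]

def kalman_correct(pred, measured, gain=0.5):
    """Blend predicted and measured centroid with a fixed scalar gain
    (a stand-in for the covariance-derived Kalman gain)."""
    x = pred[0] + gain * (measured[0] - pred[0])
    y = pred[1] + gain * (measured[1] - pred[1])
    return [x, y, pred[2], pred[3]]

state = [120.0, 65.0, 6.0, 0.0]        # centroid (120, 65), 6 px/frame right
pred = kalman_predict(state, dt=1.0)
print(pred[:2])                        # [126.0, 65.0] — narrowed search window
corrected = kalman_correct(pred, (128.0, 65.0))
print(corrected[:2])                   # [127.0, 65.0]
```

The predicted centroid is what narrows the matching search range described above; the detector then only needs to look near (126, 65) rather than over the whole frame.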
Tracking a vehicle can be divided into the following steps:
Firstly, extracting and calculating the vehicle centroid coordinates from the vehicle detection result;
Secondly, initializing the Kalman filter;
Thirdly, matching the centroid of the current-frame vehicle target against the previous frame;
Fourthly, after a successful match, updating the filter and recording the current vehicle information in preparation for the next match and tracking step;
(4) trajectory display
The motion track of the vehicle centroid is obtained by matching and tracking the target across the multi-frame images of the traffic video. First a tracking area is set in the video; tracking starts when a moving vehicle enters the area, and the centroid coordinates of each vehicle are recorded, yielding a tracking sequence defined as:
Trackcar=[(xi,yi);(xi+1,yi+1);(xi+2,yi+2);...(xi+n,yi+n)]
When the tracks of several consecutive vehicles change within a certain area and the changes are similar, it can be judged that an obstacle or pothole exists in that area; in this way the presence of an obstacle ahead is inferred from the vehicle tracks.
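The obstacle judgment from similar track changes can be sketched as a check on the lateral spread of consecutive centroid tracks; the shift threshold and minimum vehicle count are assumed, illustrative parameters:

```python
# Sketch of the obstacle judgment: if several consecutive vehicle tracks
# all swerve laterally by a similar amount inside the same zone, an
# obstacle (or pothole) is inferred there. Tracks are centroid sequences
# Track_car = [(x_i, y_i), ...].

def lateral_shift(track):
    """Spread of the y (lateral) coordinate over one vehicle's track."""
    ys = [y for _, y in track]
    return max(ys) - min(ys)

def obstacle_suspected(tracks, min_shift=2.0, min_vehicles=3):
    """Warn only when enough consecutive vehicles show a similar swerve."""
    swerving = [t for t in tracks if lateral_shift(t) >= min_shift]
    return len(swerving) >= min_vehicles

straight = [(x, 10.0) for x in range(0, 50, 10)]
swerve   = [(0, 10.0), (10, 10.0), (20, 14.0), (30, 14.0), (40, 10.0)]
print(obstacle_suspected([swerve, swerve, swerve]))      # True
print(obstacle_suspected([straight, straight, swerve]))  # False
```

Requiring several vehicles to show the same deviation is what separates a road obstacle from an individual driver's lane change.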
Preferably, the obstacle early-warning component sends warning information, comprising images and sound, according to the vehicle tracks determined by the vehicle target tracking and track extraction component. After that component finishes extracting the vehicle motion tracks, the track information is transmitted to the early-warning component and displayed at the computer terminal, completing the image-based obstacle warning. The vehicle track result is also written to a text file, and speech-synthesis technology converts the text into voice: a speech-synthesis chip is connected to a Raspberry Pi through a serial communication port, and code written in C implements the voice-broadcast function. Speech synthesis divides into two parts, text analysis and waveform synthesis. The system first preprocesses the text to be synthesized (segmentation, phonetic labeling, and the like) against the vocabulary and feature-vocabulary libraries, then decodes the tone-change rules to process the segmented symbol stream into a sound stream, and finally produces output through speech synthesis, thereby issuing the audible warning.
Compared with the prior art, the invention has the beneficial effects that:
the invention obtains traffic real-time information through the image acquisition component for timely analysis, and can effectively extract vehicle background and vehicle through the vehicle target detection component, wherein vehicle tracking is to find and extract vehicle targets in a video sequence in real time, track the vehicle targets continuously in a detection area, calculate and draw the motion track of the vehicle targets according to vehicle characteristics, and the obstacle early warning component sends early warning information according to the vehicle track determined by the vehicle target tracking and track extraction component, so that early warning can be timely sent after obstacles appear on the road surface, and traffic accidents caused by the obstacles appearing on the road surface are prevented.
Drawings
Fig. 1 is a topological structure diagram of an intelligent road surface obstacle recognition device based on a vehicle track according to the present invention.
Fig. 2 is a technical route diagram of an intelligent road surface obstacle recognition device based on a vehicle track according to the present invention.
In the figure: 1. Image acquisition component; 2. Vehicle target detection component; 3. Vehicle target tracking and track extraction component; 4. Obstacle early-warning component.
Detailed Description
In order to facilitate understanding of those skilled in the art, the technical solution of the present invention is further specifically described below.
The intelligent recognition equipment for road-surface obstacles based on vehicle tracks comprises an image acquisition component 1, a vehicle target detection component 2, a vehicle target tracking and track extraction component 3, and an obstacle early-warning component 4. Road surface here refers to highway and urban road surfaces serving automobiles, including asphalt-concrete, cement-concrete, and masonry pavements. An obstacle is any article that threatens vehicle travel, including obstructions, potholes, traffic accidents, and maintenance construction. Intelligent recognition applies target detection and target tracking methods to the images acquired by the camera: vehicles are tracked to obtain their motion tracks, and from those tracks the equipment judges whether an obstacle, a traffic accident, or maintenance construction exists at a given position on the road.
The image acquisition component 1 consists of a camera and an upright column, which acquire the image information, comprising video and snapshot pictures. To guarantee algorithm and recognition accuracy, the farthest recognition distance of the camera is limited to 200 meters, and the recognition edge of the camera is kept parallel to the vehicle travel direction. The column carrying the camera uses an anti-vibration design: rigidity is improved in four respects (foundation, material, cross-section, and structure), and the mass of the structure and the spatial distribution of damping are adjusted.
The vehicle target detection component 2 includes background extraction and vehicle extraction. Background extraction proceeds as follows: in an ideal scene, a moving vehicle can be regarded as noise superimposed on the background image. Because vehicles are diverse, the brightness of the road surface and of the vehicles in a scene differs: some vehicles are brighter than the road surface, some are darker, and some are roughly equal to it. This noise is eliminated by accumulating a number of images and then averaging them, so the scene background is obtained by accumulating and averaging consecutive video frames:

Background(x, y) = (1/N) · Σ_{i=1}^{N} Ii(x, y)

where Background(x, y) is the background image in the video, Ii(x, y) is the i-th frame image, and N is the number of video frames used for background extraction.
The vehicle extraction adopts a color background difference method to extract moving vehicles, and comprises the following steps:
(1) color background difference:
The i-th frame color image fi is differenced from the color background image BGi in each of the three channels R, G, B, which extracts the foreground moving object fgi:

fgi(x, y) = | fi(x, y) − BGi(x, y) |  (computed separately in the R, G, and B channels)

After the color background difference the image still consists of the three RGB primaries; to simplify computation it is converted to grayscale, and the vehicle moving-object foreground is finally extracted by threshold segmentation.
(2) Morphological treatment:
The foreground object obtained by the color background difference method needs further processing before a relatively complete moving-vehicle object can be extracted. The invention applies morphological operations and blob filling to the binary image to eliminate noise, holes, and similar defects. The most basic morphological operations are erosion and dilation; an opening is erosion followed by dilation, and a closing is dilation followed by erosion.
Vehicle shadow removal:
In the daytime, under strong sunlight, a vehicle casts a shadow on the road surface. If no appropriate measure eliminates it, the shadow extends into adjacent lanes and causes false detections, so the foreground image undergoes shadow elimination to improve detection accuracy. Moving-vehicle detection is usually performed outdoors, where sunlight and ground-reflected light cast vehicle shadows. A shadow is a region whose brightness is lower than the background's, and shadow-detection algorithms are studied from the mechanism of shadow formation. In a traffic-video detection scene, the brightness model of pixel (x, y) at time i is:
Si(x, y) = Ei(x, y) · Ri(x, y)
where Si(x, y) is the brightness of pixel (x, y) at time i, Ri(x, y) is the reflection coefficient, and Ei(x, y) is the illumination intensity received by a unit of the object surface. Ri(x, y) is typically small and can be regarded as constant. Ei(x, y) is modeled as:

Ei(x, y) = CA + k(x, y) · CP · cos⟨N(x, y), L⟩

where CA and CP are the ambient-light and light-source intensities respectively, N(x, y) is the normal vector of the object surface, L is the vector from the object surface toward the light source, and k(x, y) (0 ≤ k(x, y) ≤ 1) is the coefficient of light-energy loss relative to the unshadowed case: k(x, y) = 1 where there is no shadow, 0 < k(x, y) < 1 in the penumbra where sunlight is only partly blocked by the object, and k(x, y) = 0 in the umbra where sunlight is completely blocked.
Although a shadow has the same motion properties as the moving vehicle, its texture, brightness, edge contour, and similar information differ greatly. The invention adopts a moving-object shadow-elimination method in the HSV color space. The HSV color space describes a color by its hue, saturation, and brightness, which is closer to human color perception, so it reflects the color and gray-scale information of the moving target and its shadow more accurately.
The steps of the shadow elimination algorithm based on the HSV space are as follows:
The HSV space is obtained by converting from the RGB space (with R, G and B normalized to [0, 1]), namely:

V = max(R, G, B)
S = (V − min(R, G, B)) / V                          (S = 0 when V = 0)
H = 60° × (G − B) / (V − min(R, G, B))              if V = R
H = 120° + 60° × (B − R) / (V − min(R, G, B))       if V = G
H = 240° + 60° × (R − G) / (V − min(R, G, B))       if V = B

(a negative H is wrapped into [0°, 360°) by adding 360°, and H is taken as 0 when R = G = B). Here H denotes hue, S denotes saturation, V denotes brightness, and R, G, B denote the red, green and blue channels. For a shadow region in HSV color space, the hue H and saturation S are substantially unchanged relative to a non-shadow foreground region; the greatest difference is in brightness V, where the shadow region is much darker than the non-shadow region. The formula is as follows:
S(x, y) = 1 if α ≤ V_F^K(x, y) / V_B^K(x, y) ≤ β and |S_F^K(x, y) − S_B^K(x, y)| ≤ T_S and |H_F^K(x, y) − H_B^K(x, y)| ≤ T_H, otherwise S(x, y) = 0

where S(x, y) is the shadow mask of the foreground object image at coordinates (x, y) and S(x, y) = 1 marks a shadow pixel. T_S and T_H are the thresholds for the saturation and hue components respectively, α and β bound how far a shadow may darken the brightness, H_F^K, S_F^K and V_F^K represent the three-channel component values after the K-th frame color image is converted into HSV space, and H_B^K, S_B^K and V_B^K represent the three-channel component values at the (x, y) coordinates after the K-th frame dynamic color background image is converted into HSV space. According to this HSV color-space detection principle, the greatest difference between the shadow and the vehicle in the image is the difference in brightness, and shadow elimination is finally achieved on that basis. Eliminating the shadow interference in this way greatly improves the accuracy of vehicle detection.
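The HSV shadow test described above can be sketched with NumPy (a hedged sketch: the function name, the default thresholds alpha, beta, t_s and t_h, and the [0, 1] channel scaling are assumptions; a real pipeline would typically convert frames with OpenCV's cvtColor, whose HSV ranges differ):

```python
import numpy as np

def hsv_shadow_mask(frame_hsv, bg_hsv, alpha=0.4, beta=0.9, t_s=0.1, t_h=0.1):
    """HSV-space shadow mask for a foreground frame against a background.

    frame_hsv, bg_hsv: float arrays of shape (H, W, 3) holding
    (hue, saturation, value), all scaled to [0, 1].  alpha/beta bound
    the darkening ratio V_F / V_B; t_s and t_h are the saturation and
    hue thresholds.  Returns a boolean mask, True where shadow.
    """
    h_f, s_f, v_f = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    h_b, s_b, v_b = bg_hsv[..., 0], bg_hsv[..., 1], bg_hsv[..., 2]
    ratio = v_f / np.maximum(v_b, 1e-6)           # darkening ratio
    darker = (ratio >= alpha) & (ratio <= beta)   # shadow dims, not blacks out
    sat_ok = np.abs(s_f - s_b) <= t_s             # saturation nearly unchanged
    dh = np.abs(h_f - h_b)                        # hue is circular:
    hue_ok = np.minimum(dh, 1.0 - dh) <= t_h      # take the shorter arc
    return darker & sat_ok & hue_ok

# A pixel keeping hue/saturation but dropping to 60% brightness -> shadow.
bg = np.full((1, 1, 3), [0.3, 0.5, 0.8])
fr = np.full((1, 1, 3), [0.3, 0.5, 0.48])
mask = hsv_shadow_mask(fr, bg)
# Equal brightness (ratio 1.0 exceeds beta) -> not classified as shadow.
fr2 = np.full((1, 1, 3), [0.3, 0.5, 0.8])
mask2 = hsv_shadow_mask(fr2, bg)
```

Pixels flagged by the mask are removed from the foreground before the tracking stage, so the circumscribed rectangle fits the vehicle rather than the vehicle plus its shadow.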
The vehicle target tracking and track extraction component 3 finds and extracts vehicle targets in real time in the video sequence, tracks them continuously within the detection area, and computes and draws their motion trajectories from vehicle features, providing the data basis for the subsequent motion analysis. Moving-vehicle tracking in the invention uses a Kalman filter tracking algorithm: a Kalman filter model is designed around multi-feature matching, with the vehicle center coordinates and vehicle speed as matching features, from which the vehicle motion track is extracted. The algorithm predicts the motion state of the vehicle target in the next frame, which narrows the search range for the moving vehicle and improves both the real-time performance and the reliability of the search. To track a vehicle with Kalman filtering, the target features are selected first: the vehicle center coordinates serve as the target feature, and the motion speed of the vehicle target is computed by tracking those coordinates. Feature-similarity matching is then performed to find the corresponding vehicle between frames. Finally, the linear dynamic model is established and the Kalman filter parameters are selected. The specific steps are as follows:
(1) tracking vehicle centroid extraction
The vehicle is framed by a circumscribed rectangle during image processing, and the center of this rectangle is taken as the vehicle centroid. The diagonal corner coordinates of the rectangle are: lower-left corner A(x0, y0) and upper-right corner B(x1, y1).
The center coordinate can be obtained by aligning the coordinates of the corner pointsWhereinThe movement speed of the mass center can be obtained according to the mass centervy,To indicate the speed of movement of the vehicle in the x-axis direction, vyTo indicate the moving speed of the vehicle in the y-axis direction to estimate the moving speed of the moving vehicle. Whereink denotes a k-th frame image, and i denotes an i-th coordinate in the k-th frame image.
(2) Centroid matching method
Considering that the time difference between two consecutive frames is short, the vehicle centroid does not undergo a large positional shift, so the position of the same vehicle changes little between consecutive frames. Based on this analysis, a vehicle matching rule is proposed: the centroid distance of the same vehicle between two consecutive frames is small. A threshold can therefore be set and the distance between the two centroids compared with it to judge whether two detections are the same vehicle. Suppose the i-th target in frame k is to be tracked; the distances from it to all vehicle targets in frame k+1 are calculated. For the j-th target in frame k+1:

D(i, j) = sqrt[ (x_i^k − x_j^(k+1))² + (y_i^k − y_j^(k+1))² ]

wherein (x_i^k, y_i^k) represents the centroid coordinates of the i-th vehicle target in frame k, (x_j^(k+1), y_j^(k+1)) represents the centroid coordinates of the j-th vehicle target in frame k+1, D(i, j) represents the distance between the two vehicle targets, and TH is the set distance threshold. If D(i, j) ≤ TH the match succeeds and the two detections are the same vehicle; if D(i, j) > TH the match fails and they are not the same vehicle.
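The matching rule can be sketched as a greedy nearest-neighbour search under the threshold TH (the function name and the 30-pixel default are illustrative assumptions, not values from the patent):

```python
import math

def match_centroids(prev, curr, th=30.0):
    """Greedy nearest-centroid matching between consecutive frames.

    prev, curr: lists of (x, y) centroids in frame k and frame k+1;
    th is the distance threshold TH in pixels.  Returns a dict mapping
    each index i in `prev` to its matched index j in `curr`, or None
    when no unclaimed centroid lies within the threshold.
    """
    matches = {}
    used = set()
    for i, (xi, yi) in enumerate(prev):
        best_j, best_d = None, th
        for j, (xj, yj) in enumerate(curr):
            if j in used:
                continue
            d = math.hypot(xi - xj, yi - yj)   # D(i, j)
            if d <= best_d:
                best_j, best_d = j, d
        matches[i] = best_j
        if best_j is not None:
            used.add(best_j)
    return matches

# Vehicle 0 moved 5 px and vehicle 1 moved 4 px between frames;
# the far-away blob at (900, 900) claims neither.
m = match_centroids([(100, 50), (300, 60)], [(103, 54), (296, 60), (900, 900)])
```

A production tracker would additionally use the Kalman-predicted position (rather than the raw previous position) as the search center, which is exactly how the prediction step narrows the search range.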
(3) Selection of Kalman filter parameters
Based on the actual video images, the vehicle target is assumed to move uniformly between two adjacent frames (the time difference between consecutive frames is small), so the linear dynamic model of the Kalman filter is set as follows:
the motion state vector is:
xk = [xk, yk, vx,k, vy,k]^T
wherein xk and yk are the coordinates of the vehicle centroid, and vx,k and vy,k represent the motion speed of the vehicle centroid coordinates;
The transition matrix Ak is:

Ak = [ 1  0  Δt  0
       0  1  0   Δt
       0  0  1   0
       0  0  0   1 ]

where Δt represents the time interval between two frames.

The observation matrix Ck (only the centroid position is observed) is:

Ck = [ 1  0  0  0
       0  1  0  0 ]

The covariance matrices of the Gaussian white process noise W(k−1) and measurement noise Vk are Q_k = E[W(k−1)·W(k−1)^T] and R_k = E[Vk·Vk^T] respectively.

The estimation error covariance is P_k = E[(x_k − x̂_k)(x_k − x̂_k)^T].
Tracking a vehicle can be divided into the following steps:
① extract and calculate the vehicle centroid coordinates from the vehicle tracking result;
② initialize the Kalman filter;
③ match the vehicle centroid of the current-frame vehicle target against the previous frame;
④ after the match succeeds, update the filter and record the current vehicle information in preparation for the next round of matching and tracking.
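The steps above can be sketched with a constant-velocity Kalman filter over the state [x, y, vx, vy]^T (a minimal NumPy sketch; the noise scales q and r are illustrative defaults, since the patent gives no numeric covariance values):

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity Kalman model for state [x, y, vx, vy]^T.

    dt is the inter-frame interval; q and r scale the process and
    measurement noise covariances.  Returns (A, C, Q, R).
    """
    A = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # transition matrix Ak
    C = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe the centroid only
    return A, C, q * np.eye(4), r * np.eye(2)

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle; z is the measured centroid (x, y)."""
    x_pred = A @ x                        # predict next state
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (np.asarray(z, float) - C @ x_pred)
    P_new = (np.eye(4) - K @ C) @ P_pred
    return x_new, P_new

# Track a centroid moving +2 px/frame in x; the velocity estimate converges.
A, C, Q, R = make_cv_kalman()
x, P = np.zeros(4), np.eye(4) * 10.0
for k in range(1, 20):
    x, P = kalman_step(x, P, (2.0 * k, 0.0), A, C, Q, R)
```

Between steps ② and ③, `x_pred` gives the predicted centroid around which the next frame's candidates are searched, which is how the filter narrows the matching range.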
(4) trajectory display
The motion track of the vehicle centroid is obtained by matching and tracking the target across the multi-frame images of the traffic video. A tracking area is first set in the video; tracking starts once a moving vehicle enters the area, and the centroid coordinates of each vehicle are recorded, yielding a tracking sequence defined as follows:
Trackcar = [(x_i, y_i); (x_(i+1), y_(i+1)); (x_(i+2), y_(i+2)); …; (x_(i+n), y_(i+n))]
When the trajectories of several consecutive vehicles change within the same region and the changes are similar, it can be judged that an obstacle or pothole exists in that region; the presence of an obstacle ahead is thus inferred from the vehicle trajectories.
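This trajectory-based judgement, that several consecutive vehicles swerve similarly in the same region, can be sketched as follows (the zone geometry, vehicle count and pixel thresholds are all illustrative assumptions):

```python
def detect_obstacle(tracks, zone=(200, 300), min_vehicles=3, min_shift=15.0):
    """Flag a possible obstacle when consecutive tracks swerve alike.

    tracks: list of trajectories, each a list of (x, y) centroids with x
    roughly along the driving direction.  zone is the (x_start, x_end)
    region being watched.  A track 'swerves' if its mean lateral (y)
    position inside the zone differs from its mean y position before the
    zone by more than min_shift pixels; an alarm requires at least
    min_vehicles swerves, all in the same direction.
    """
    shifts = []
    for tr in tracks:
        before = [y for x, y in tr if x < zone[0]]
        inside = [y for x, y in tr if zone[0] <= x <= zone[1]]
        if not before or not inside:
            continue                      # track never crossed the zone
        shifts.append(sum(inside) / len(inside) - sum(before) / len(before))
    big = [s for s in shifts if abs(s) >= min_shift]
    same_dir = big and (all(s > 0 for s in big) or all(s < 0 for s in big))
    return len(big) >= min_vehicles and bool(same_dir)

# Three straight tracks: no alarm.  Three tracks that all shift +20 px
# of lateral offset inside x in [200, 300]: alarm.
straight = [[(x, 100) for x in range(0, 400, 50)] for _ in range(3)]
swerving = [[(x, 100 if x < 200 else 120) for x in range(0, 400, 50)]
            for _ in range(3)]
no_alarm = detect_obstacle(straight)
alarm = detect_obstacle(swerving)
```

Requiring several same-direction swerves guards against a single lane change being mistaken for an obstacle.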
The obstacle early-warning component 4 issues warning information, consisting of an image part and a sound part, according to the vehicle track determined by the vehicle target tracking and track extraction component 3. After component 3 finishes extracting the vehicle motion track, the track information is passed to the early-warning component and the vehicle track is displayed on the computer terminal, completing the image warning of the obstacle. The vehicle track result is also written to a text file, and the text is converted into speech by speech synthesis. A speech synthesis chip is connected to the Raspberry Pi through a serial communication port, and code written in C implements the voice broadcast function. Speech synthesis divides into two parts, text analysis and synthesis proper: the system first preprocesses the text to be synthesized (segmentation, pronunciation labeling and so on) against the vocabulary library and the feature vocabulary library, then processes the segmented symbol stream by applying pronunciation-change rules to obtain the sound stream, from which the synthesized speech output is produced, thereby issuing the audible warning.
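The text side of this warning pipeline can be sketched as below (a hedged sketch: the message wording is invented for illustration, and real speech-synthesis chips each define their own serial command framing, so the raw-text write here merely stands in for the chip protocol; consult the chip's datasheet before use):

```python
import io

def format_warning(track_id, zone, shift_px):
    """Build the text written to the warning file and then synthesized.

    All field names are illustrative -- the patent only says the
    trajectory result is written to a text file and spoken aloud.
    """
    return ("Obstacle warning: vehicles near x=%d-%d are deviating by "
            "about %.0f pixels (track %s). Possible obstacle or pothole."
            % (zone[0], zone[1], shift_px, track_id))

def send_to_tts(text, port_writer):
    """Write warning text to a port connected to a TTS chip.

    port_writer is anything with a .write(bytes) method (e.g. a
    pyserial Serial object, or io.BytesIO for testing).  The bare
    UTF-8 write below is a placeholder for the chip-specific framing.
    """
    port_writer.write(text.encode("utf-8"))

msg = format_warning("A12", (200, 300), 20)
buf = io.BytesIO()        # stand-in for the serial port
send_to_tts(msg, buf)
```

Swapping `io.BytesIO` for a `serial.Serial` handle (and adding the chip's command header) would connect this sketch to the hardware path described above.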
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the present invention as defined in the accompanying claims.
Claims (5)
1. A vehicle-track-based road surface obstacle intelligent recognition device, characterized in that the intelligent recognition tracks vehicles to obtain their motion trajectories and judges from those trajectories whether the road location has an obstacle, a traffic accident, or maintenance construction.
2. The vehicle-track-based road surface obstacle intelligent recognition device as recited in claim 1, wherein the image acquisition component consists of a camera and a mounting column, and the acquired image information comprises video and snapshot pictures.
3. The vehicle-track-based road surface obstacle intelligent recognition device as recited in claim 1, wherein the vehicle object detection component mainly implements background extraction and vehicle extraction, and the background extraction step is as follows: in an ideal scene, a moving vehicle is regarded as noise on the background image; because of vehicle diversity, the brightness of the road surface and of the vehicles differs, some vehicles being brighter than the road surface and some darker; the noise is eliminated by accumulating a number of images and then averaging, so the scene background is obtained by accumulating and averaging consecutive images over a section of video; the formula is as follows:

Background(x, y) = (1/N) · Σ Ii(x, y),  summed over i = 1 … N

wherein Background(x, y) is the background image of the video, Ii(x, y) is the i-th frame image, and N represents the number of frames used for background extraction;
the method for extracting the moving vehicles by adopting the color background difference method comprises the following steps:
(1) color background difference:
the i-th frame color image fi and the color background image BGi are differenced in each of the three channels R, G and B, so that the foreground moving object fgi is extracted; the formula is as follows:

fgi(x, y) = | fi(x, y) − BGi(x, y) |   (computed separately for the R, G and B channels)

after the color background difference the image still consists of the three RGB primaries; it is converted to gray scale to simplify the computation, and finally the foreground of the moving vehicle target in the image is extracted by threshold segmentation;
(2) morphological processing:
the foreground target obtained by the color background difference method needs further processing to extract a relatively complete moving vehicle target, so the binary image is processed with morphology and block filling to eliminate noise and holes; the most basic morphological operations are erosion and dilation, an opening operation being erosion followed by dilation and a closing operation being dilation followed by erosion;
vehicle shadow removal:
in the daytime, under strong sunlight, the vehicle casts a shadow on the road surface; a shadow is a region whose brightness is lower than the background; in a traffic video detection scene, the brightness model of the pixel point (x, y) at moment i is as follows:
Si(x,y)=Ei(x,y)Ri(x,y)
wherein Si(x, y) represents the luminance of the pixel point (x, y) at time i, Ri(x, y) represents the reflection coefficient, and Ei(x, y) represents the illumination intensity received by the object surface element; Ri(x, y) is generally small and can be considered constant; Ei(x, y) is calculated as:

Ei(x, y) = CA + CP · cos∠(N(x, y), L)              (no shadow)
Ei(x, y) = CA + k(x, y) · CP · cos∠(N(x, y), L)    (penumbra)
Ei(x, y) = CA                                      (umbra)

wherein CA and CP are the ambient light intensity and the light source intensity respectively, N(x, y) represents the normal vector of the object surface, L is the vector from the object surface toward the light source, and k(x, y) (0 ≤ k(x, y) ≤ 1) represents the loss coefficient of light energy in the penumbra, formed where sunlight is only partially blocked by the object, relative to the unshadowed case; when k(x, y) = 0, sunlight is completely blocked by the object and the resulting dark shadow (umbra) receives only the constant ambient intensity CA;
the moving-target shadow elimination method in HSV color space uses the hue, saturation and brightness information of colors, which is closer to human color perception, so it reflects the color and gray-level information of the moving target and its shadow more accurately;
the steps of the shadow elimination algorithm based on the HSV space are as follows:
the HSV space is obtained by converting from the RGB space (with R, G and B normalized to [0, 1]), namely:

V = max(R, G, B)
S = (V − min(R, G, B)) / V                          (S = 0 when V = 0)
H = 60° × (G − B) / (V − min(R, G, B))              if V = R
H = 120° + 60° × (B − R) / (V − min(R, G, B))       if V = G
H = 240° + 60° × (R − G) / (V − min(R, G, B))       if V = B

(a negative H is wrapped into [0°, 360°) by adding 360°, and H is taken as 0 when R = G = B); wherein H represents hue, S represents saturation, V represents brightness, and R, G and B represent the red, green and blue channels; for a shadow region in HSV color space, the hue H and the saturation S are substantially unchanged relative to the non-shadow foreground region, the biggest difference being that in brightness V the shadow region is much darker than the non-shadow region; the formula is as follows:
S(x, y) = 1 if α ≤ V_F^K(x, y) / V_B^K(x, y) ≤ β and |S_F^K(x, y) − S_B^K(x, y)| ≤ T_S and |H_F^K(x, y) − H_B^K(x, y)| ≤ T_H, otherwise S(x, y) = 0

where S(x, y) is the shadow mask of the foreground object image at coordinates (x, y) and S(x, y) = 1 represents a shadow; T_S and T_H are the thresholds for the saturation and hue components respectively, and α and β bound how far a shadow may darken the brightness; H_F^K, S_F^K and V_F^K respectively represent the three-channel component values after the K-th frame color image is converted into HSV space, and H_B^K, S_B^K and V_B^K respectively represent the three-channel component values at the (x, y) coordinates after the K-th frame dynamic color background image is converted into HSV space.
4. The vehicle-track-based road surface obstacle intelligent recognition device as recited in claim 1, wherein the vehicle target tracking and track extraction component performs vehicle tracking, namely finding and extracting vehicle targets in the video sequence in real time, continuously tracking them within the detection area, and computing and drawing their motion trajectories from vehicle features to provide the data basis for later motion analysis; a Kalman filter tracking model is designed around multi-feature matching using a Kalman filter tracking algorithm, with the vehicle center coordinates and vehicle speed as matching features, from which the vehicle motion track is extracted; to perform vehicle tracking with Kalman filtering, target features are selected first, the vehicle center coordinates being chosen as the target feature and the motion speed of the vehicle target being computed by tracking those coordinates, after which feature-similarity matching is performed; the specific steps are as follows:
(1) tracking vehicle centroid extraction
the vehicle is framed by a circumscribed rectangle during image processing, and the center of the rectangle is taken as the vehicle centroid; the diagonal corner coordinates of the rectangle are the lower-left corner A(x0, y0) and the upper-right corner B(x1, y1);
the center coordinates are obtained from the corner points: xc = (x0 + x1) / 2, yc = (y0 + y1) / 2; from the centroid, the centroid motion speed is obtained as vx = x_i^(k+1) − x_i^k, vy = y_i^(k+1) − y_i^k, where vx denotes the movement speed of the vehicle in the x-axis direction and vy the movement speed in the y-axis direction, used to estimate the speed of the moving vehicle; k represents the k-th frame image, and i represents the i-th target centroid in the k-th frame image;
(2) centroid matching method
vehicle matching rule: the centroid distance of the same vehicle between two consecutive frames is small; a threshold can be set and the distance between the two centroids compared with it to judge whether two detections are the same vehicle; if the i-th target in frame k is to be tracked, the distances from it to all vehicle targets in frame k+1 are calculated; for the j-th target in frame k+1:

D(i, j) = sqrt[ (x_i^k − x_j^(k+1))² + (y_i^k − y_j^(k+1))² ]

wherein (x_i^k, y_i^k) represents the centroid coordinates of the i-th vehicle target in frame k, (x_j^(k+1), y_j^(k+1)) represents the centroid coordinates of the j-th vehicle target in frame k+1, D(i, j) represents the distance between the two vehicle targets, and TH is the set distance threshold; if D(i, j) ≤ TH the match succeeds and the two are the same vehicle; if D(i, j) > TH the match fails and they are not the same vehicle;
(3) selection of Kalman filter parameters
according to the actual video images, the vehicle target is assumed to move uniformly between two adjacent frames (the time difference between consecutive frames is small), and the linear dynamic model of the Kalman filter is set as follows:
the motion state vector is:
xk = [xk, yk, vx,k, vy,k]^T
wherein xk and yk are the coordinates of the vehicle centroid, and vx,k and vy,k represent the motion speed of the vehicle centroid coordinates;
the transition matrix Ak is:

Ak = [ 1  0  Δt  0
       0  1  0   Δt
       0  0  1   0
       0  0  0   1 ]

where Δt represents the time interval between two frames;

the observation matrix Ck (only the centroid position is observed) is:

Ck = [ 1  0  0  0
       0  1  0  0 ]

the covariance matrices of the Gaussian white process noise W(k−1) and measurement noise Vk are Q_k = E[W(k−1)·W(k−1)^T] and R_k = E[Vk·Vk^T] respectively;

the estimation error covariance is P_k = E[(x_k − x̂_k)(x_k − x̂_k)^T];
tracking a vehicle can be divided into the following steps:
① extract and calculate the vehicle centroid coordinates from the vehicle tracking result;
② initialize the Kalman filter;
③ match the vehicle centroid of the current-frame vehicle target against the previous frame;
④ after the match succeeds, update the filter and record the current vehicle information in preparation for the next round of matching and tracking;
(4) trajectory display
Firstly, a tracking area is set in a video, when a moving vehicle enters the tracking area, the tracking is started, and the coordinates of the mass center of each vehicle are recorded, so that a tracking sequence is obtained, wherein the tracking sequence is defined as follows:
Trackcar = [(x_i, y_i); (x_(i+1), y_(i+1)); (x_(i+2), y_(i+2)); …; (x_(i+n), y_(i+n))]
wherein (x_i, y_i) represents the coordinates of the vehicle centroid in the i-th frame, n represents the number of consecutive frames over which the same vehicle is tracked, and connecting these coordinates forms the motion track of the vehicle centroid.
5. The vehicle-track-based road surface obstacle intelligent recognition device as recited in claim 1, wherein the obstacle early-warning component issues warning information, consisting of an image part and a sound part, according to the vehicle track determined by the vehicle target tracking and track extraction component; after completing vehicle motion track extraction, the tracking component sends the track information to the early-warning component, and the vehicle track is displayed on the computer terminal, completing the image warning of the obstacle; the vehicle track result is written to a text file and the text is converted into speech by speech synthesis; a speech synthesis chip connected to the Raspberry Pi through a serial communication port, with code written in C, implements the voice broadcast function; speech synthesis divides into two parts, text analysis and synthesis proper: the system first preprocesses the text to be synthesized (segmentation, pronunciation labeling and so on) against the vocabulary library and the feature vocabulary library, then processes the segmented symbol stream by applying pronunciation-change rules to obtain the sound stream, from which the speech output is produced.
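The background-averaging, colour background difference, and morphological opening steps described in claim 3 can be sketched as follows (a minimal NumPy stand-in: the 30-gray-level threshold and 3×3 structuring element are illustrative, and a real system would typically use OpenCV's accumulate, absdiff, and morphologyEx):

```python
import numpy as np

def estimate_background(frames):
    """Background as the per-pixel mean of N frames -- claim 3's
    accumulate-then-average step.  frames: array of shape (N, H, W, 3)."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def color_foreground(frame, background, thresh=30.0):
    """Per-channel background difference, graying, then thresholding;
    returns a binary foreground mask."""
    diff = np.abs(np.asarray(frame, float) - background)  # |f_i - BG_i| per R,G,B
    gray = diff.mean(axis=-1)                             # graying for simplicity
    return gray > thresh

def binary_open(mask):
    """3x3 binary opening (erosion then dilation) to strip isolated
    noise pixels -- a stand-in for the morphological step."""
    def filt(m, reduce_fn):
        p = np.pad(m, 1, constant_values=False)
        views = [p[i:i + m.shape[0], j:j + m.shape[1]]
                 for i in range(3) for j in range(3)]
        return reduce_fn(np.stack(views), axis=0)
    return filt(filt(mask, np.min), np.max)

# Ten near-static 10x10 frames; frame 0 gains a bright 4x4 "vehicle"
# and a single noisy pixel.  Opening keeps the vehicle, drops the noise.
rng = np.random.default_rng(0)
frames = rng.integers(98, 103, size=(10, 10, 10, 3))
bg = estimate_background(frames)
frame = frames[0].astype(float)
frame[3:7, 3:7] += 90     # the "vehicle"
frame[0, 9] += 90         # one noisy pixel
mask = color_foreground(frame, bg)
clean = binary_open(mask)
```

The `clean` mask is what the shadow-elimination and bounding-rectangle steps of the description would then operate on.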
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110387405.5A CN113077494A (en) | 2021-04-10 | 2021-04-10 | Road surface obstacle intelligent recognition equipment based on vehicle orbit |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113077494A true CN113077494A (en) | 2021-07-06 |
Family
ID=76617244
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110387405.5A Pending CN113077494A (en) | 2021-04-10 | 2021-04-10 | Road surface obstacle intelligent recognition equipment based on vehicle orbit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113077494A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113888573A (en) * | 2021-09-26 | 2022-01-04 | 同济大学 | Method and system for generating virtual and real fusion video of traffic scene |
CN113920728A (en) * | 2021-10-11 | 2022-01-11 | 南京微达电子科技有限公司 | Detection and early warning method and system for obstacles thrown on expressway |
CN114779180A (en) * | 2022-06-20 | 2022-07-22 | 成都瑞达物联科技有限公司 | Multipath interference mirror image target filtering method for vehicle-road cooperative radar |
CN114913210A (en) * | 2022-07-19 | 2022-08-16 | 山东幻科信息科技股份有限公司 | Motion trajectory identification method, system and equipment based on AI (Artificial Intelligence) visual algorithm |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103914688A (en) * | 2014-03-27 | 2014-07-09 | 北京科技大学 | Urban road obstacle recognition system |
CN106875424A (en) * | 2017-01-16 | 2017-06-20 | 西北工业大学 | A kind of urban environment driving vehicle Activity recognition method based on machine vision |
CN109557920A (en) * | 2018-12-21 | 2019-04-02 | 华南理工大学广州学院 | A kind of self-navigation Jian Tu robot and control method |
CN210760742U (en) * | 2018-09-21 | 2020-06-16 | 湖北大学 | Intelligent vehicle auxiliary driving system |
Non-Patent Citations (2)
Title |
---|
Peng Linyu: "Design of an Obstacle Recognition System Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series * 
Gao Dongdong: "Research on Parking and Wrong-Way Driving Detection Based on Vehicle Tracking Trajectories", China Master's Theses Full-text Database, Engineering Science and Technology Series II *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210706 |