CN109711264B - Method and device for detecting occupation of bus lane - Google Patents


Info

Publication number
CN109711264B
CN109711264B (application CN201811452012.2A)
Authority
CN
China
Prior art keywords
area
bus
vehicle
preset
license plate
Prior art date
Legal status
Active
Application number
CN201811452012.2A
Other languages
Chinese (zh)
Other versions
CN109711264A (en)
Inventor
汪超洋
Current Assignee
Wuhan Fenghuo Zhongzhi Wisdom Star Technology Co ltd
Original Assignee
Wuhan Fenghuo Zhongzhi Wisdom Star Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Fenghuo Zhongzhi Wisdom Star Technology Co ltd
Priority to CN201811452012.2A
Publication of CN109711264A
Application granted
Publication of CN109711264B

Landscapes

  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method and a device for detecting the occupation of a bus lane. The method comprises the following steps: obtaining the position of a bus; judging whether the position of the bus is in a preset bus lane area; if the position of the bus is in the bus lane area, acquiring an original image collected by a camera; determining a bus lane line area and a moving target area in the original image; when the size of the moving target area is within a preset vehicle area size range, judging whether the area of the overlapping region between the bus lane line area and the moving target area is larger than a preset threshold; and if the area of the overlapping region is larger than the preset threshold, determining a target detection area based on the overlapping region, performing vehicle detection in the target detection area, and, after a vehicle area is detected, performing license plate recognition on the vehicle area to obtain the license plate number of the vehicle in the vehicle area. Applying the embodiments of the invention improves the accuracy of bus lane occupation detection.

Description

Method and device for detecting occupation of bus lane
Technical Field
The invention relates to the technical field of traffic violation detection, in particular to a method and a device for detecting occupation of a bus lane.
Background
A bus lane is a lane reserved for buses and is usually only used by buses; if other vehicles, such as motorcycles, tricycles, cars or other non-buses, drive on the bus lane, this constitutes illegal lane occupation. In order to ensure the normal operation of urban buses, the occupation of the bus lane can be detected.
In the course of implementing the invention, the inventor found that existing bus lane occupation detection methods generally judge that the bus lane is occupied by a vehicle as soon as the bus lane line region in an image is detected as blocked. Because the bus lane line region may be blocked not only by vehicles but also by people, animals, road railings and other objects on the road, the detection accuracy is not high.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method and a device for detecting the occupation of a bus lane so as to improve the detection accuracy.
The invention is realized by the following steps:
in a first aspect, the invention provides a method for detecting the occupation of a bus lane, wherein a camera is installed at the front part of a bus, and the method comprises the following steps:
obtaining the position of the bus; judging whether the position of the bus is in a preset bus lane area or not;
if the position of the bus is in the bus lane area, acquiring an original image acquired by the camera; determining a bus lane line area and a moving target area in the original image;
under the condition that the size of the moving target area is within the size range of a preset vehicle area, judging whether the area of an overlapping area of the bus lane area and the moving target area is larger than a preset threshold value or not;
and if the area of the overlapping area is larger than a preset threshold value, determining a target detection area based on the overlapping area, detecting vehicles in the target detection area, and after detecting the vehicle area, identifying license plates of the vehicle area to obtain license plate numbers of the vehicles in the vehicle area.
Optionally, after obtaining the license plate number, the method further includes:
and sending the original image, the license plate number, the position of the bus and the acquisition time to a management platform.
Optionally, if the size of the moving target area is not within the preset vehicle area size range, or no vehicle area is detected in the target detection area, or the area of the overlapping area is not greater than the preset threshold, the method further includes:
and acquiring a next image of the original image, taking the next image as the original image, and continuously determining a bus lane area and a moving target area in the original image.
Optionally, determining the bus lane area in the original image includes:
identifying a bus lane line in the original image;
and expanding the areas on two sides of the bus lane according to a preset angle and a preset width to obtain the bus lane area.
Optionally, identifying the bus lane line in the original image includes:
extracting a region of interest in the original image;
establishing a sparse grid image, and taking an intersection of the sparse grid image and the region of interest to obtain an intersection image; performing color filtering on the intersection image to obtain a feature map only retaining the colors of preset bus lane lines;
extracting the bus lane edge feature points in the feature map by using an edge detection algorithm;
performing image expansion processing on the characteristic points of the edges of the bus lane lines to obtain the range area of the bus lane lines;
and carrying out straight line detection on the range area where the bus lane line is located, and taking the detected straight line as the bus lane line in the original image.
Optionally, determining a target detection area based on the overlap area includes:
and determining a rectangular area centered on the geometric center point of the overlapping area as the target detection area, wherein the width of the rectangular area is a preset detection width, the height of the rectangular area is the product of a preset multiple and a target ratio, and the target ratio is the ratio of the vertical coordinate of the geometric center point of the overlapping area to the total number of vertical pixels of the region of interest.
Optionally, the vehicle detection in the target detection area includes:
zooming the target detection area to a preset size;
determining each image to be detected from the zoomed target detection area;
inputting the image to be detected into a target convolutional neural network aiming at each image to be detected to obtain a classification result of whether a vehicle exists in the image to be detected or not; the target convolutional neural network is obtained by pre-training a preset initial convolutional neural network by using a first training sample set;
and combining all the images to be detected with the vehicle in the images to be detected according to the classification result to obtain the vehicle area in the target detection area.
Optionally, the first training sample set includes a positive sample set including images containing vehicles and a negative sample set including images containing no vehicles.
Optionally, identifying the license plate of the vehicle region to obtain the license plate number of the vehicle in the vehicle region includes:
carrying out color screening on the vehicle area to obtain a first license plate area containing a preset license plate color;
performing morphological filtering on the first license plate area, and performing character positioning on the first license plate area subjected to the morphological filtering to obtain a character area;
inputting the character area to a target SVM network to obtain a judgment result, wherein the judgment result is that the character area belongs to a license plate area or does not belong to the license plate area; the target SVM network is obtained by pre-training a preset initial SVM network by using a second training sample set;
and performing character recognition on the character area with the judgment result of belonging to the license plate area, and taking the recognized character as the license plate number of the vehicle in the vehicle area.
In a second aspect, the invention provides a device for detecting the occupation of a bus lane, wherein a camera is installed at the front part of a bus, and the device comprises:
the first obtaining module is used for obtaining the position of the bus; judging whether the position of the bus is in a preset bus lane area or not;
the determining module is used for acquiring an original image acquired by the camera if the position of the bus is in the special lane area of the bus; determining a bus lane line area and a moving target area in the original image;
the judging module is used for judging whether the area of an overlapping area of the bus lane area and the moving target area is larger than a preset threshold value or not under the condition that the size of the moving target area is within a preset vehicle area size range;
and the detection module is used for determining a target detection area based on the overlapping area when the judgment result of the judgment module is yes, detecting the vehicle in the target detection area, and after the vehicle area is detected, identifying the license plate of the vehicle in the vehicle area to obtain the license plate number of the vehicle in the vehicle area.
Optionally, the apparatus further includes a sending module, configured to:
and after the license plate number is obtained, sending the original image, the license plate number, the position of the bus and the acquisition time to a management platform.
Optionally, the apparatus further includes a second obtaining module, configured to:
and if the size of the moving target area is not within the preset vehicle area size range, or no vehicle area is detected in the target detection area, or the area of the overlapping area is not greater than the preset threshold, acquiring a next image of the original image, taking the next image as the original image, and continuing to determine the bus lane line area and the moving target area in the original image.
Optionally, the determining module determines the bus lane area in the original image, specifically:
identifying a bus lane line in the original image;
and expanding the areas on two sides of the bus lane according to a preset angle and a preset width to obtain the bus lane area.
Optionally, the determining module identifies the bus lane line in the original image, specifically:
extracting a region of interest in the original image;
establishing a sparse grid image, and taking an intersection of the sparse grid image and the region of interest to obtain an intersection image; performing color filtering on the intersection image to obtain a feature map only retaining the colors of preset bus lane lines;
extracting the bus lane edge feature points in the feature map by using an edge detection algorithm;
performing image expansion processing on the characteristic points of the edges of the bus lane lines to obtain the range area of the bus lane lines;
and carrying out straight line detection on the range area where the bus lane line is located, and taking the detected straight line as the bus lane line in the original image.
Optionally, the detecting module determines the target detection area based on the overlap area, specifically:
and determining a rectangular area centered on the geometric center point of the overlapping area as the target detection area, wherein the width of the rectangular area is a preset detection width, the height of the rectangular area is the product of a preset multiple and a target ratio, and the target ratio is the ratio of the vertical coordinate of the geometric center point of the overlapping area to the total number of vertical pixels of the region of interest.
Optionally, the detection module performs vehicle detection in the target detection area, specifically:
zooming the target detection area to a preset size;
determining each image to be detected from the zoomed target detection area;
inputting the image to be detected into a target convolutional neural network aiming at each image to be detected to obtain a classification result of whether a vehicle exists in the image to be detected or not; the target convolutional neural network is obtained by pre-training a preset initial convolutional neural network by using a first training sample set;
and combining all the images to be detected with the vehicle in the images to be detected according to the classification result to obtain the vehicle area in the target detection area.
Optionally, the first training sample set includes a positive sample set including images containing vehicles and a negative sample set including images containing no vehicles.
Optionally, the detecting module performs license plate recognition on the vehicle region to obtain a license plate number of a vehicle in the vehicle region, and specifically includes:
carrying out color screening on the vehicle area to obtain a first license plate area containing a preset license plate color;
performing morphological filtering on the first license plate area, and performing character positioning on the first license plate area subjected to the morphological filtering to obtain a character area;
inputting the character area to a target SVM network to obtain a judgment result, wherein the judgment result is that the character area belongs to a license plate area or does not belong to the license plate area; the target SVM network is obtained by pre-training a preset initial SVM network by using a second training sample set;
and performing character recognition on the character area with the judgment result of belonging to the license plate area, and taking the recognized character as the license plate number of the vehicle in the vehicle area.
The invention has the following beneficial effects: by applying the embodiments of the invention, a target detection area is determined based on the overlapping area only when the size of the moving target area is within the preset vehicle area size range and the area of the overlapping area is larger than the preset threshold, and vehicle detection is then performed in the target detection area. Moving targets that are obviously not vehicles are thereby excluded, unnecessary detection steps are reduced, and detection accuracy and efficiency are improved. License plate recognition is performed on the vehicle area only after a vehicle area is detected, which further avoids detecting non-vehicles, improves the detection accuracy, reduces the image area that has to be processed for license plate recognition, and improves the recognition efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flow chart of a bus lane occupation detection method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a bus lane occupation detection device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the method for detecting the occupation of a bus lane provided by the invention can be applied to electronic devices; in specific applications, the electronic device can be a computer, a personal computer, a tablet, a mobile phone, a server, and the like, all of which are reasonable. In addition, the functional software for realizing the method provided by the embodiment of the invention can be dedicated bus lane occupation detection software, or a plug-in of other software that has the bus lane occupation detection function.
Referring to fig. 1, an embodiment of the present invention provides a method for detecting a bus lane occupation, including the following steps:
s101, acquiring the position of a bus; judging whether the position of the bus is in a preset bus lane area or not;
the position of the bus may be acquired by a Positioning device installed in the bus or placed in the bus, and the Positioning device may be a GPS (Global Positioning System) device, a Radio Frequency Identification (RFID) Positioning device, or other devices having a Positioning function.
The bus lane area can be determined in advance. When the position of the bus is in the preset bus lane area, the bus is running or waiting to run; in this situation, in order to guarantee the normal running of the bus and prevent the bus lane from being occupied by other, non-bus vehicles, the lane occupation of the bus lane can be detected.
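For illustration only, the following Python sketch shows one way the position check of step S101 could be implemented, assuming the preset bus lane area is stored as a polygon of (longitude, latitude) vertices; the coordinates and names such as BUS_LANE_POLYGON are placeholders, not values from this embodiment.

```python
# Minimal sketch: decide whether the bus position lies inside a preset
# bus lane area represented as a polygon of (longitude, latitude) vertices.
# The polygon below and the function names are illustrative placeholders.
BUS_LANE_POLYGON = [(114.301, 30.592), (114.305, 30.592),
                    (114.305, 30.598), (114.301, 30.598)]

def point_in_polygon(point, polygon):
    """Ray-casting test: returns True if point lies inside polygon."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray starting at the point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

bus_position = (114.303, 30.595)  # e.g. read from the GPS module
if point_in_polygon(bus_position, BUS_LANE_POLYGON):
    pass  # bus is in the bus lane area: start acquiring camera frames
```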
In order to collect images of the road ahead, the camera can be installed at the front part of the bus. Further, in order to effectively capture the extent of the bus lane and keep the line of sight from being blocked, the camera can be installed at the center of the top of the bus windshield, tilted downward at a certain angle. There may be one or more cameras, and each camera may be a monocular camera or a binocular camera.
S102, if the position of the bus is in a bus lane area, acquiring an original image acquired by the camera; determining a bus lane line area and a moving target area in the original image;
if the position of the bus is judged to be in the bus lane area, the camera can be started, and therefore the camera collects the front road information to obtain an original image. Or, the camera always collects the original image after the bus is in the starting state, and the electronic device (the execution main body of the invention) starts to acquire the original image collected by the camera after judging that the position of the bus is in the special lane area of the bus.
After the electronic device obtains the original image, the electronic device can perform moving target detection on the original image to obtain a moving target area in the original image. The moving target area is an area where the moving target is located, and the moving target can be a movable object such as a person, a vehicle, a leaf, an animal and the like. The moving object detection can be realized by adopting an interframe difference algorithm or a background subtraction algorithm.
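As a rough illustration of the inter-frame difference approach mentioned above, the sketch below uses OpenCV to difference two consecutive grayscale frames and return candidate moving target areas; the threshold and kernel size are assumed values, and background subtraction (e.g. cv2.createBackgroundSubtractorMOG2) could be used in the same way.

```python
import cv2

def moving_target_regions(prev_gray, curr_gray, diff_thresh=25):
    """Inter-frame differencing sketch; returns bounding boxes of moving targets."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Close small holes so that one moving object forms one connected region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Each bounding rectangle is a candidate moving target area.
    return [cv2.boundingRect(c) for c in contours]
```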
The bus lane line region in the original image can be determined specifically by the following method:
step A, identifying a bus lane line in an original image;
and B, expanding the areas on the two sides of the bus lane line according to a preset angle and a preset width to obtain the bus lane line area.
The bus lane line region is the region of the original image that contains a bus lane line. One or two bus lane lines may be present in the original image, or there may be none; when bus lane lines are present, the areas on both sides of each bus lane line can be expanded. The preset angle and the preset width may be set in advance, and the invention is not limited in this respect. For example, the preset angle may be 30°, 40°, 50°, 60°, etc., and the preset width may be 5, 6, 7 or 8 pixels, etc.
Identifying the bus lane line in the original image specifically may include the following steps:
step A1, extracting a region of interest in the original image;
the region of interest is an image region preset in the original image, the image region is a key point concerned by image analysis, for example, the region can be a bus lane line and a possible region of a vehicle with violation lane occupation, most interference in the original image can be eliminated by setting the region of interest, the original image to be processed is changed from a large image to a small image, the algorithm processing range is reduced, and therefore the processing time is reduced. The region of interest may be a preset rectangular region, and the preset rectangular region may be: the coordinates of the four vertices of the rectangular area are preset, or the pixel row/pixel column range of interest may be preset.
Step A2, establishing a sparse grid image, and taking an intersection of the sparse grid image and the region of interest to obtain an intersection image; performing color filtering on the intersection image to obtain a feature map only retaining the colors of the preset bus lane lines;
the sparse grid image is composed of grid cells, the grid cells can be square or hexagonal, the size of each grid cell is the same, and the specific size can be preset. The intersection of the sparse grid image and the region of interest is taken, image information except the grid lines can be eliminated, redundant calculation is reduced, a large amount of environmental interference (such as sky, roadside fences, trees and the like) can be eliminated by the grid unit, and the pixels of the bus lane lines and the gray distribution characteristics of the pixels are reserved. The color of the preset bus lane line can be preset according to the actual color of the bus lane line in real life, for example, the actual color of the bus lane line is yellow, and the color of the preset bus lane line is yellow.
In order to conveniently and reasonably extract the characteristic points of the bus lane line edge, the size of a grid unit can be set according to the resolution of an original image, and when the resolution of the original image is higher, the size of the grid unit can be set to be larger, so that the reduction of the processing speed due to the excessive extracted characteristic points of the bus lane line is avoided; when the resolution of the original image is smaller, the size of the grid unit can be set smaller, so that the situation that follow-up processing cannot be carried out due to the fact that the number of extracted bus lane line characteristic points is too small is avoided.
An HSV (hue, saturation, value) color model can be used to color-filter the intersection image to obtain a feature map containing the bus lane line color. The color parameters in this model are hue (H), saturation (S) and value (V). Color filtering can be performed with a color histogram method; if the bus lane line is yellow, the yellow regions in the intersection image are retained after color filtering and the non-yellow regions are filtered out, forming the feature map.
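A minimal OpenCV sketch of this color-filtering step is given below, assuming a yellow bus lane line; the HSV bounds are illustrative values and would need tuning for real images.

```python
import cv2
import numpy as np

def filter_lane_color(intersection_img_bgr):
    """Keep only pixels whose HSV values fall in an assumed yellow range."""
    hsv = cv2.cvtColor(intersection_img_bgr, cv2.COLOR_BGR2HSV)
    lower_yellow = np.array([15, 80, 80])    # assumed lower bound for yellow
    upper_yellow = np.array([35, 255, 255])  # assumed upper bound for yellow
    mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
    # Feature map: original pixels where the lane-line color is present, black elsewhere.
    return cv2.bitwise_and(intersection_img_bgr, intersection_img_bgr, mask=mask)
```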
A3, extracting the edge feature points of the bus lane lines in the feature map by using an edge detection algorithm;
the edge detection algorithm can detect pixels with step changes in pixel gray levels in the image, and the set of the pixels is the edge feature point of the image. The edge detection algorithm may be a Canny edge detection algorithm, a Sobel edge detection algorithm, a Laplace edge detection algorithm, or the like. The embodiment of the invention does not limit the specifically adopted edge detection algorithm.
In order to extract the bus lane line edge feature points quickly, the feature map can be divided symmetrically into a left area and a right area, and the bus lane line edge feature points in each area are extracted with a Sobel edge detection algorithm. For the left half of the feature map, Sobel edge detection is applied with Sobel operator template directions of 0°, 45°, 90°, 180°, 225° and 270° and weights of (1, 2, 1, 1, 1, 2, 1), yielding the gray value of each pixel in the left half. For the right half of the feature map, Sobel edge detection is applied with Sobel operator template directions of 0°, 90°, 135°, 180°, 270° and 315° and weights of (1, 1, 2, 1, 1, 1, 2), yielding the gray value of each pixel in the right half. If the difference between the gray value of a pixel and the gray values of the surrounding pixels is greater than a preset difference, the pixel is taken as a bus lane line edge feature point. In principle the Sobel operator template direction ranges over 0° to 360°; applying this embodiment reduces the number of template directions that need to be calculated and improves the calculation efficiency. The preset difference may be set in advance and may be, for example, 10, 20, 30, and so on.
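The sketch below illustrates the idea of step A3 using the standard 3 × 3 Sobel operator rather than the directional templates described above, so it is a simplification; the gradient threshold is an assumed value.

```python
import cv2
import numpy as np

def lane_edge_points(feature_map_bgr, grad_thresh=30):
    """Simplified Sobel-based extraction of lane-line edge feature points."""
    gray = cv2.cvtColor(feature_map_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    # Pixels with a large gray-level step are taken as edge feature points.
    ys, xs = np.where(magnitude > grad_thresh)
    return list(zip(xs, ys))
```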
A4, performing image expansion processing on the characteristic points of the edges of the bus lane lines to obtain the range areas of the bus lane lines;
the number of the bus lane edge feature points can be multiple, and image expansion processing can be performed on each bus lane edge feature point, so that the range area where the bus lane is located is obtained. Specifically, an elliptical template, a square template, a circular template or the like can be used for carrying out image expansion processing on the characteristic points of the edges of the bus lane lines.
And A5, carrying out straight line detection on the range area where the bus lane line is located, and taking the detected straight line as the bus lane line in the original image.
Since the inclination angle of a bus lane line usually falls within the ranges of −75° to −40° and 40° to 75°, the straight line detection over the range region where the bus lane line is located can be restricted to these angle ranges. To speed up detection, the range region where the bus lane line is located can be divided into a left half lane line area and a right half lane line area. Because the camera captures the road ahead, the slope of the bus lane line in the left half lane line area is positive and the slope in the right half lane line area is negative, so straight line detection with inclination angles of 40° to 75° can be performed on the left half lane line area and straight line detection with inclination angles of −75° to −40° can be performed on the right half lane line area.
Specifically, a Hough linear detection algorithm, a Freeman linear detection algorithm, an inchworm crawling algorithm or other linear detection algorithms can be adopted to perform linear detection on the left half lane line area and the right half lane line area respectively.
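The following sketch illustrates steps A4 and A5 together: the edge feature points (as a binary mask) are dilated with an elliptical template, probabilistic Hough line detection is applied, and only lines whose inclination lies in the stated angle range are kept. The kernel size and Hough parameters are assumptions, and the angle sign follows the image coordinate convention (y axis pointing down).

```python
import cv2
import numpy as np

def detect_lane_lines(edge_mask, left_half=True):
    """Dilate edge feature points (A4) and detect angle-constrained lines (A5)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    region = cv2.dilate(edge_mask, kernel, iterations=1)          # step A4
    lines = cv2.HoughLinesP(region, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)      # step A5
    keep = []
    if lines is None:
        return keep
    lo, hi = (40, 75) if left_half else (-75, -40)
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        if lo <= angle <= hi:
            keep.append((x1, y1, x2, y2))
    return keep
```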
S103, under the condition that the size of the moving target area is within a preset vehicle area size range, judging whether the area of an overlapping area of the bus lane area and the moving target area is larger than a preset threshold value or not;
it can be understood that the size of the vehicle area in the image is usually within a certain range, and by setting the preset vehicle area size range, moving objects obviously not being vehicles, such as birds, leaves, people and the like, can be excluded, so that unnecessary detection processes are reduced, and the processing efficiency is improved.
The overlapping area may be considered as an area where the bus lane line is blocked by the moving object. The preset threshold value can be set in advance according to experience, when the area of the overlapping area is larger than the preset threshold value, the bus lane can be considered to be possibly occupied by the moving target in the moving target area, and when the area of the overlapping area is not larger than the preset threshold value, the bus lane can be considered to be not occupied by the moving target in the moving target area.
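As an illustration of the overlap test in S103, the sketch below rasterises the bus lane line area and the moving target area as binary masks of the same size and compares the area of their intersection against a preset threshold; the threshold value is an assumption.

```python
import cv2

def lane_is_possibly_occupied(lane_mask, target_mask, area_thresh=400):
    """Return (decision, overlap mask): True when the overlap area exceeds the threshold."""
    overlap = cv2.bitwise_and(lane_mask, target_mask)
    overlap_area = cv2.countNonZero(overlap)
    return overlap_area > area_thresh, overlap
```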
S104, if the area of the overlapping area is larger than a preset threshold value, determining a target detection area based on the overlapping area, carrying out vehicle detection in the target detection area, and carrying out license plate recognition on the vehicle area after the vehicle area is detected to obtain the license plate number of the vehicle in the vehicle area.
In order to more reasonably analyze the possible existence area of the vehicle, in one implementation, the determining the target detection area based on the overlapping area comprises:
and determining a rectangular area centered on the geometric center point of the overlapping area as the target detection area, wherein the width of the rectangular area is a preset detection width, the height of the rectangular area is the product of a preset multiple and a target ratio, and the target ratio is the ratio of the vertical coordinate of the geometric center point of the overlapping area to the total number of vertical pixels of the region of interest.
The preset detection width may be the maximum pixel width of the lane in the image, or the product of that maximum pixel width and a preset width ratio, where the preset width ratio may be 1.8, 1.9, 2.0, and so on. For example, if the maximum pixel width of the lane in the image is X1 pixels, the preset detection width may be X1; or, with a preset width ratio of 1.9, the preset detection width may be 1.9 × X1. The maximum pixel width of the lane in the original image may be measured in advance with a pixel measurement tool. The preset multiple may likewise be 1.8, 1.9, 2.0, etc.; assuming the target ratio is θ and the preset multiple is 1.9, the height is 1.9 × θ.
The ordinate of the geometric center point of the overlap region is the ordinate of the geometric center point of the overlap region in the image coordinate system or the camera coordinate system or the world coordinate system. Thus, through the overlapping area, the target detection area containing the moving target can be determined.
Of course, in other embodiments, a circular, elliptical or square region centered on the geometric center point of the overlapping area may be determined as the target detection area; the shape of the target detection area is not limited, as long as it can contain the entire moving target.
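A sketch of how the rectangular target detection area described above could be computed is given below. The embodiment defines the height as the product of the preset multiple and the target ratio but does not spell out how this product maps to pixels, so scaling it by the height of the region of interest is an assumption here, as are the default parameter values.

```python
import numpy as np

def target_detection_rect(overlap_mask, roi_height, detect_width=300, multiple=1.9):
    """Rectangle (x, y, w, h) centered on the geometric center of the overlap region."""
    ys, xs = np.nonzero(overlap_mask)          # assumes the overlap is non-empty
    cx, cy = int(xs.mean()), int(ys.mean())    # geometric center of the overlap
    ratio = cy / float(roi_height)             # target ratio
    # The text defines height = multiple * ratio; scaling by roi_height to obtain
    # a pixel height is an assumption of this sketch.
    height = int(multiple * ratio * roi_height)
    x0 = max(cx - detect_width // 2, 0)
    y0 = max(cy - height // 2, 0)
    return x0, y0, detect_width, height
```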
Specifically, the vehicle detection in the target detection area includes:
zooming the target detection area to a preset size;
determining each image to be detected from the zoomed target detection area;
inputting the image to be detected into a target convolutional neural network aiming at each image to be detected to obtain a classification result of whether a vehicle exists in the image to be detected or not; the target convolutional neural network is obtained by pre-training a preset initial convolutional neural network by using a first training sample set;
and combining all the images to be detected with the vehicle in the images to be detected according to the classification result to obtain the vehicle area in the target detection area.
And if all the classification results are that no vehicle exists in the image to be detected, no vehicle region is detected in the target detection region.
The predetermined size may be predetermined, and may be, for example, 160 × 160, 170 × 170, 180 × 180, and the like. The size of the image to be detected may also be fixed, for example, may be 40 × 40, 50 × 50, 60 × 60, and so on.
Illustratively, the target detection area is scaled to 160 × 160 and each image to be detected is determined from the scaled target detection area, with each image to be detected being 40 × 40: a 40 × 40 detection window is set in the target detection area, the window is moved with a step of 10 pixels, and the image obtained after each move of the detection window is taken as one image to be detected.
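The sliding-window example above can be sketched as follows; the sizes and step match the example, and the function name is illustrative.

```python
import cv2

def sliding_windows(target_region, region_size=160, win=40, step=10):
    """Scale the target detection area and yield 40x40 windows with a 10-pixel step."""
    scaled = cv2.resize(target_region, (region_size, region_size))
    for y in range(0, region_size - win + 1, step):
        for x in range(0, region_size - win + 1, step):
            yield (x, y), scaled[y:y + win, x:x + win]  # one image to be detected
```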
The first training sample set comprises a positive sample set and a negative sample set. The positive sample set comprises images containing vehicles, for example images of vehicles of various manufacturers, colors and vehicle types captured from multiple angles, under multiple lighting conditions and against multiple backgrounds; the negative sample set comprises images without vehicles, such as images of pedestrians, trees, railings and the like. In order to enlarge the sample sets and improve the detection precision of the target convolutional neural network, both the positive and negative sample sets consist of collected samples and reconstructed samples, where a reconstructed sample is obtained by changing the brightness of a collected sample, and the image sizes of the collected and reconstructed samples are normalized to the preset size.
The preset initial convolutional neural network can be divided into the following parts: an input layer, a C1 convolutional layer, an S2 pooling layer, a C3 convolutional layer, an S4 pooling layer, a C5 convolutional layer, an S6 pooling layer, an L7 fully connected layer and an output layer, with each pooling layer connected to the convolutional layer before it. The C1 convolutional layer may use 6 convolution kernels of 3 × 3, with 60 training parameters for this layer. The S2 pooling layer performs an average pooling operation with a pooling receptive field of 2 and a stride of 2. The C3 convolutional layer may use 12 convolution kernels of 4 × 4, each connected to the 6 feature maps of the previous layer, generating 12 convolutional feature maps in total; this layer has 204 training parameters. The S4 pooling layer is similar to S2: each pooling unit is connected to one feature map of C3, producing 12 feature maps with non-overlapping receptive fields and an output feature map size of 8 × 8. Each convolution kernel of the C5 convolutional layer is connected to all feature maps of the S4 pooling layer and extracts 6 different features, with 60 training parameters in total. In the S6 pooling layer, each pooling unit is connected to one C5 feature map, generating 6 feature maps with non-overlapping receptive fields and an output feature map size of 3 × 3. The L7 fully connected layer arranges the S6 features into a column vector of size 1 × 54. The output layer multiplies the L7 column vector by a weight matrix, adds an offset, and produces a 1 × 2 column vector through a Sigmoid activation function. Whether the input grayscale image contains a vehicle is judged from the classification value of the input image in this column vector: for example, if the classification value is 1, the input image is judged to contain a vehicle; if the classification value is 0, the input image is judged not to contain a vehicle.
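For illustration, a PyTorch sketch of a network with this layer layout is given below. The 3 × 3 kernel size of the C5 layer is inferred from the stated feature map sizes (8 × 8 before C5, 3 × 3 after S6, a 54-element vector at L7) and is an assumption rather than a value given explicitly in the text.

```python
import torch
import torch.nn as nn

class VehicleClassifier(nn.Module):
    """Sketch of the described classifier; input is a 1 x 40 x 40 grayscale patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=3),    # C1: 6 kernels 3x3 -> 6 x 38 x 38
            nn.AvgPool2d(2),                   # S2: average pooling -> 6 x 19 x 19
            nn.Conv2d(6, 12, kernel_size=4),   # C3: 12 kernels 4x4 -> 12 x 16 x 16
            nn.AvgPool2d(2),                   # S4 -> 12 x 8 x 8
            nn.Conv2d(12, 6, kernel_size=3),   # C5 (kernel size assumed) -> 6 x 6 x 6
            nn.AvgPool2d(2),                   # S6 -> 6 x 3 x 3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # L7: vector of size 54
            nn.Linear(54, 2),                  # output layer: 1 x 2 vector
            nn.Sigmoid(),                      # vehicle / non-vehicle scores
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: scores = VehicleClassifier()(torch.randn(1, 1, 40, 40))
```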
Taking into account factors such as the sample resolution, the retention of vehicle features and the feature map size, each sample in the first training sample set can be preprocessed into a 40 × 40 grayscale image. Each preprocessed sample is used as an input image, and the preset initial convolutional neural network is trained with a back propagation algorithm or a gradient descent algorithm. The learning rate of the network may be set to 2, 1.5, 1.6, 1.7 or the like, and every 10, 20 or 30 samples may be trained as one batch. After multiple iterations the network tends to become stable, and the stabilized network is used as the target convolutional neural network.
The preprocessing of a sample may be as follows: crop and scale the sample to 160 × 160, convert the sample from the RGB color model to the HSV color model, change the mean of the value (V) channel in the HSV color model to obtain an image with changed brightness, convert that image back from the HSV color model to the RGB color model, convert it to grayscale, and normalize the grayscale image to 40 × 40 to obtain the preprocessed 40 × 40 grayscale image.
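A minimal OpenCV sketch of this preprocessing is shown below; the brightness offset is an illustrative value.

```python
import cv2
import numpy as np

def preprocess_sample(sample_bgr, v_offset=20):
    """Scale to 160x160, shift the V channel in HSV, then grayscale and resize to 40x40."""
    img = cv2.resize(sample_bgr, (160, 160))
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = np.clip(v.astype(np.int16) + v_offset, 0, 255).astype(np.uint8)
    img = cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (40, 40))
```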
The license plate recognition of the vehicle area is carried out to obtain the license plate number of the vehicle in the vehicle area, and the license plate recognition method comprises the following steps:
carrying out color screening on the vehicle area to obtain a first license plate area containing a preset license plate color;
performing morphological filtering on the first license plate area, and performing character positioning on the first license plate area subjected to the morphological filtering to obtain a character area;
inputting the character area into a target SVM network to obtain a judgment result, wherein the judgment result is that the character area belongs to a license plate area or does not belong to the license plate area; the target SVM network is obtained by pre-training a preset initial SVM network by using a second training sample set;
and performing character recognition on the character area with the judgment result of belonging to the license plate area, and taking the recognized character as the license plate number of the vehicle in the vehicle area.
The preset license plate color can be preset according to all colors that may appear in the license plate in real life, for example, the preset license plate color may include a plurality of colors such as yellow, green, black, blue, and the like. If the color of a certain area in the vehicle area belongs to a certain color in the preset license plate colors, the area can be considered to contain the preset license plate colors, and then the area can be used as a first license plate area.
By performing morphological filtering on the first license plate area, noise interference in the first license plate area can be eliminated, so that the edges of the first license plate area become clearer and smoother. This in turn facilitates character positioning and makes the resulting character area more accurate.
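The two steps above (color screening and morphological filtering) can be sketched as follows for a blue plate; the HSV range and kernel size are assumed values, and other preset plate colors would use their own ranges.

```python
import cv2
import numpy as np

def first_plate_regions(vehicle_bgr):
    """Color-screen the vehicle area for a blue plate, then morphologically filter it."""
    hsv = cv2.cvtColor(vehicle_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([100, 80, 60]), np.array([130, 255, 255]))
    # Morphological closing removes noise and smooths the plate-region edges.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]  # candidate first plate areas
```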
The second training sample set comprises a number of license plate images. A preset initial SVM (Support Vector Machine) network is trained with the second training sample set, and after the preset initial SVM network is trained to convergence the target SVM network is obtained. By inputting an image into the target SVM network, a judgment result of whether the image is a license plate image can be obtained. Therefore, to judge whether the character area belongs to a license plate area, the character area can be input into the target SVM network, and the judgment result is obtained.
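As an illustration of the SVM judgment step, the sketch below trains a scikit-learn SVM on HOG features of plate and non-plate images; the feature choice, patch size and SVM parameters are assumptions not specified in the embodiment.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# HOG descriptor: window 64x32, block 16x16, stride 8x8, cell 8x8, 9 bins (assumed).
hog = cv2.HOGDescriptor((64, 32), (16, 16), (8, 8), (8, 8), 9)

def region_features(region_bgr):
    """HOG feature vector of a candidate character/plate region."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 32))
    return hog.compute(gray).ravel()

def train_plate_svm(plate_images, non_plate_images):
    X = [region_features(img) for img in plate_images + non_plate_images]
    y = [1] * len(plate_images) + [0] * len(non_plate_images)
    svm = SVC(kernel="rbf")          # the "target SVM network"
    svm.fit(np.array(X), np.array(y))
    return svm

# svm.predict([region_features(candidate)]) == 1 -> judged to belong to a plate area
```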
If the character area belongs to the license plate area, character recognition can be performed on the character area by using an ANN (Artificial Neural Network) model or other character recognition algorithms to obtain the license plate number.
In addition, color screening of the vehicle area may yield several first license plate areas. If the character area obtained from one first license plate area does not belong to a license plate area, the next first license plate area can be taken and the step of performing morphological filtering on the first license plate area is continued.
Thus, by applying the technical solution provided by the embodiments of the invention, a target detection area is determined based on the overlapping area only when the size of the moving target area is within the preset vehicle area size range and the area of the overlapping area is larger than the preset threshold, and vehicle detection is then performed in the target detection area. Moving targets that are obviously not vehicles are thereby excluded, unnecessary detection steps are reduced, and detection accuracy and efficiency are improved. License plate recognition is performed on the vehicle area only after a vehicle area is detected, which further avoids detecting non-vehicles, improves the detection accuracy, reduces the image area that has to be processed for license plate recognition, and improves the recognition efficiency.
In order to facilitate uniform monitoring and management of occupied vehicles, in one implementation manner, after obtaining the license plate number, the method further includes:
and sending the original image, the license plate number, the position of the bus and the acquisition time to a management platform.
The management platform can be used for carrying out unified monitoring management on the occupied vehicles. The acquisition time may be the time when the camera acquires the original image.
In addition, the original image, the license plate number, the position of the bus and the acquisition time can be stored, either locally or on another storage server.
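Purely as an illustration of reporting to the management platform, the sketch below posts the record over HTTP; the URL, field names and interface are assumptions, since the platform interface is not specified.

```python
import requests

def report_violation(original_image_path, plate_number, bus_position, capture_time):
    """Illustrative upload of one violation record to a hypothetical platform endpoint."""
    with open(original_image_path, "rb") as f:
        files = {"image": f}
        data = {
            "plate_number": plate_number,
            "longitude": bus_position[0],
            "latitude": bus_position[1],
            "capture_time": capture_time,
        }
        requests.post("https://example.com/bus-lane/violations",
                      files=files, data=data, timeout=10)
```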
In addition, in order to improve the processing efficiency, if the size of the moving target area is not within the preset vehicle area size range, or no vehicle area is detected in the target detection area, or the area of the overlapping area is not greater than the preset threshold, the method further comprises:
and acquiring a next image of the original image, taking the next image as the original image, and continuously determining a bus lane area and a moving target area in the original image.
If the size of the moving target area is not within the preset vehicle area size range, or no vehicle area is detected in the target detection area, or the area of the overlapping area is not greater than the preset threshold, this indicates that the moving target in the moving target area is not a lane-occupying vehicle; the next original image can then be processed directly, which improves the detection efficiency.
Corresponding to the method embodiment, the embodiment of the invention also provides a device for detecting the occupation of the bus lane.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a bus lane occupation detection device provided in an embodiment of the present invention, a camera is installed at the front of a bus, and the device includes:
a first obtaining module 201, configured to obtain a location of the bus; judging whether the position of the bus is in a preset bus lane area or not;
the determining module 202 is used for acquiring an original image acquired by the camera when the position of the bus is in the special lane area of the bus; determining a bus lane line area and a moving target area in the original image;
the judging module 203 is configured to judge whether an area of an overlapping region of the bus lane line region and the moving target region is larger than a preset threshold value under the condition that the size of the moving target region is within a preset vehicle region size range;
the detecting module 204 is configured to, when the determination result of the determining module 203 is yes, determine a target detection area based on the overlapping area, perform vehicle detection in the target detection area, and perform license plate recognition on the vehicle area after the vehicle area is detected, so as to obtain a license plate number of a vehicle in the vehicle area.
Thus, by applying the embodiments of the invention, a target detection area is determined based on the overlapping area only when the size of the moving target area is within the preset vehicle area size range and the area of the overlapping area is larger than the preset threshold, and vehicle detection is then performed in the target detection area. Moving targets that are obviously not vehicles are thereby excluded, unnecessary detection steps are reduced, and detection accuracy and efficiency are improved. License plate recognition is performed on the vehicle area only after a vehicle area is detected, which further avoids detecting non-vehicles, improves the detection accuracy, reduces the image area that has to be processed for license plate recognition, and improves the recognition efficiency.
Optionally, the apparatus further includes a sending module, configured to:
and after the license plate number is obtained, sending the original image, the license plate number, the position of the bus and the acquisition time to a management platform.
Optionally, the apparatus further includes a second obtaining module, configured to:
and if the size of the moving target area is not within the preset vehicle area size range, or no vehicle area is detected in the target detection area, or the area of the overlapping area is not greater than the preset threshold, acquiring a next image of the original image, taking the next image as the original image, and continuing to determine the bus lane line area and the moving target area in the original image.
Optionally, the determining module 202 determines the bus lane area in the original image, specifically:
identifying a bus lane line in the original image;
and expanding the areas on two sides of the bus lane according to a preset angle and a preset width to obtain the bus lane area.
Optionally, the determining module 202 identifies a bus lane line in the original image, specifically:
extracting a region of interest in the original image;
establishing a sparse grid image, and taking an intersection of the sparse grid image and the region of interest to obtain an intersection image; performing color filtering on the intersection image to obtain a feature map only retaining the colors of preset bus lane lines;
extracting the bus lane edge feature points in the feature map by using an edge detection algorithm;
performing image expansion processing on the characteristic points of the edges of the bus lane lines to obtain the range area of the bus lane lines;
and carrying out straight line detection on the range area where the bus lane line is located, and taking the detected straight line as the bus lane line in the original image.
Optionally, the detection module 204 determines the target detection area based on the overlap area, specifically:
and determining a rectangular area centered on the geometric center point of the overlapping area as the target detection area, wherein the width of the rectangular area is a preset detection width, the height of the rectangular area is the product of a preset multiple and a target ratio, and the target ratio is the ratio of the vertical coordinate of the geometric center point of the overlapping area to the total number of vertical pixels of the region of interest.
Optionally, the detection module 204 performs vehicle detection in the target detection area, specifically:
zooming the target detection area to a preset size;
determining each image to be detected from the zoomed target detection area;
inputting the image to be detected into a target convolutional neural network aiming at each image to be detected to obtain a classification result of whether a vehicle exists in the image to be detected or not; the target convolutional neural network is obtained by pre-training a preset initial convolutional neural network by using a first training sample set;
and combining all the images to be detected with the vehicle in the images to be detected according to the classification result to obtain the vehicle area in the target detection area.
Optionally, the first training sample set includes a positive sample set including images containing vehicles and a negative sample set including images containing no vehicles.
Optionally, the detecting module 204 performs license plate recognition on the vehicle area to obtain a license plate number of a vehicle in the vehicle area, specifically:
carrying out color screening on the vehicle area to obtain a first license plate area containing a preset license plate color;
performing morphological filtering on the first license plate area, and performing character positioning on the first license plate area subjected to the morphological filtering to obtain a character area;
inputting the character area to a target SVM network to obtain a judgment result, wherein the judgment result is that the character area belongs to a license plate area or does not belong to the license plate area; the target SVM network is obtained by pre-training a preset initial SVM network by using a second training sample set;
and performing character recognition on the character area with the judgment result of belonging to the license plate area, and taking the recognized character as the license plate number of the vehicle in the vehicle area.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A method for detecting the lane occupation of a bus is characterized in that a camera is installed at the front part of the bus, and the method comprises the following steps:
obtaining the position of the bus; judging whether the position of the bus is in a preset bus lane area or not;
if the position of the bus is in the bus lane area, acquiring an original image acquired by the camera; determining a bus lane line area and a moving target area in the original image;
under the condition that the size of the moving target area is within the size range of a preset vehicle area, judging whether the area of an overlapping area of the bus lane area and the moving target area is larger than a preset threshold value or not;
and if the area of the overlapping area is larger than a preset threshold value, determining a target detection area based on the overlapping area, detecting vehicles in the target detection area, and after detecting the vehicle area, identifying license plates of the vehicle area to obtain license plate numbers of the vehicles in the vehicle area.
2. The method of claim 1, wherein after obtaining the license plate number, the method further comprises:
and sending the original image, the license plate number, the position of the bus and the acquisition time to a management platform.
3. The method of claim 1, wherein, if the size of the moving target area is not within the preset vehicle area size range, or no vehicle area is detected in the target detection area, or the area of the overlapping area is not greater than the preset threshold, the method further comprises:
and acquiring a next image of the original image, taking the next image as the original image, and continuously determining a bus lane area and a moving target area in the original image.
4. The method of claim 1, wherein determining a bus lane area in the original image comprises:
identifying a bus lane line in the original image;
and expanding the areas on two sides of the bus lane according to a preset angle and a preset width to obtain the bus lane area.
5. The method of claim 4, wherein identifying a bus lane in the original image comprises:
extracting a region of interest in the original image;
establishing a sparse grid image, and taking an intersection of the sparse grid image and the region of interest to obtain an intersection image; performing color filtering on the intersection image to obtain a feature map only retaining the colors of preset bus lane lines;
extracting the bus lane edge feature points in the feature map by using an edge detection algorithm;
performing image expansion processing on the characteristic points of the edges of the bus lane lines to obtain the range area of the bus lane lines;
and carrying out straight line detection on the range area where the bus lane line is located, and taking the detected straight line as the bus lane line in the original image.
6. The method of claim 1, wherein performing vehicle detection in the target detection area comprises:
zooming the target detection area to a preset size;
determining each image to be detected from the zoomed target detection area;
inputting the image to be detected into a target convolutional neural network aiming at each image to be detected to obtain a classification result of whether a vehicle exists in the image to be detected or not; the target convolutional neural network is obtained by pre-training a preset initial convolutional neural network by using a first training sample set;
and combining all the images to be detected with the vehicle in the images to be detected according to the classification result to obtain the vehicle area in the target detection area.
7. The method of claim 6, wherein the first training sample set comprises a positive sample set comprising images containing vehicles and a negative sample set comprising images without vehicles.
8. The method of claim 1, wherein identifying the license plate of the vehicle region to obtain the license plate number of the vehicle in the vehicle region comprises:
carrying out color screening on the vehicle area to obtain a first license plate area containing a preset license plate color;
performing morphological filtering on the first license plate area, and performing character positioning on the first license plate area subjected to the morphological filtering to obtain a character area;
inputting the character area to a target SVM network to obtain a judgment result, wherein the judgment result is that the character area belongs to a license plate area or does not belong to the license plate area; the target SVM network is obtained by pre-training a preset initial SVM network by using a second training sample set;
and performing character recognition on the character area with the judgment result of belonging to the license plate area, and taking the recognized character as the license plate number of the vehicle in the vehicle area.
9. A device for detecting the occupation of a bus lane, characterized in that a camera is installed at the front part of the bus, and the device comprises:
the first obtaining module is used for obtaining the position of the bus; judging whether the position of the bus is in a preset bus lane area or not;
the determining module is used for acquiring an original image acquired by the camera when the position of the bus is in the special lane area of the bus; determining a bus lane line area and a moving target area in the original image;
the judging module is used for judging whether the area of an overlapping area of the bus lane area and the moving target area is larger than a preset threshold value or not under the condition that the size of the moving target area is within a preset vehicle area size range;
and the detection module is used for determining a target detection area based on the overlapping area when the judgment result of the judgment module is yes, detecting the vehicle in the target detection area, and after the vehicle area is detected, identifying the license plate of the vehicle in the vehicle area to obtain the license plate number of the vehicle in the vehicle area.
CN201811452012.2A 2018-11-30 2018-11-30 Method and device for detecting occupation of bus lane Active CN109711264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811452012.2A CN109711264B (en) 2018-11-30 2018-11-30 Method and device for detecting occupation of bus lane

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811452012.2A CN109711264B (en) 2018-11-30 2018-11-30 Method and device for detecting occupation of bus lane

Publications (2)

Publication Number Publication Date
CN109711264A CN109711264A (en) 2019-05-03
CN109711264B true CN109711264B (en) 2020-12-18

Family

ID=66254441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811452012.2A Active CN109711264B (en) 2018-11-30 2018-11-30 Method and device for detecting occupation of bus lane

Country Status (1)

Country Link
CN (1) CN109711264B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798658A (en) * 2019-11-08 2020-10-20 方勤 Traffic lane passing efficiency detection platform
CN110991201B (en) * 2019-11-25 2023-04-18 浙江大华技术股份有限公司 Bar code detection method and related device
CN113128264B (en) * 2019-12-30 2023-07-07 杭州海康汽车技术有限公司 Vehicle region determining method and device and electronic equipment
CN111428688B (en) * 2020-04-16 2022-07-26 成都旸谷信息技术有限公司 Intelligent vehicle driving lane identification method and system based on mask matrix
CN111860219B (en) * 2020-06-30 2024-01-05 杭州科度科技有限公司 High-speed channel occupation judging method and device and electronic equipment
CN112633062A (en) * 2020-11-18 2021-04-09 合肥湛达智能科技有限公司 Deep learning bus lane occupation detection method based on embedded terminal
CN112733846B (en) * 2020-12-31 2024-01-12 精英数智科技股份有限公司 License plate detection method, device and system
CN112784817B (en) * 2021-02-26 2023-01-31 上海商汤科技开发有限公司 Method, device and equipment for detecting lane where vehicle is located and storage medium
CN112949465A (en) * 2021-02-26 2021-06-11 上海商汤智能科技有限公司 Vehicle continuous lane change recognition method, device, equipment and storage medium
CN113076852A (en) * 2021-03-30 2021-07-06 华录智达科技股份有限公司 Vehicle-mounted snapshot processing system occupying bus lane based on 5G communication
CN113191272A (en) * 2021-04-30 2021-07-30 杭州品茗安控信息技术股份有限公司 Engineering image identification method, identification system and related device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9562778B2 (en) * 2011-06-03 2017-02-07 Robert Bosch Gmbh Combined radar and GPS localization system
CN102394013A (en) * 2011-06-28 2012-03-28 徐培龙 Real-time monitoring system and method for lane-occupying running of freight vehicles on express highway
US8698896B2 (en) * 2012-08-06 2014-04-15 Cloudparc, Inc. Controlling vehicle use of parking spaces and parking violations within the parking spaces using multiple cameras
CN103996031A (en) * 2014-05-23 2014-08-20 奇瑞汽车股份有限公司 Self adaptive threshold segmentation lane line detection system and method
CN106887004A (en) * 2017-02-24 2017-06-23 电子科技大学 A kind of method for detecting lane lines based on Block- matching

Also Published As

Publication number Publication date
CN109711264A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN109711264B (en) Method and device for detecting occupation of bus lane
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN110619750B (en) Intelligent aerial photography identification method and system for illegal parking vehicle
CN103824066B (en) A kind of licence plate recognition method based on video flowing
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
CN108268867B (en) License plate positioning method and device
CN109635656A (en) Vehicle attribute recognition methods, device, equipment and medium neural network based
JP5223675B2 (en) Vehicle detection device, vehicle detection method, and vehicle detection program
US20210192227A1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
CN107665327B (en) Lane line detection method and device
CN111382704A (en) Vehicle line-pressing violation judgment method and device based on deep learning and storage medium
Prates et al. Brazilian license plate detection using histogram of oriented gradients and sliding windows
CN107464245B (en) Image structure edge positioning method and device
CN202134079U (en) Unmanned vehicle lane marker line identification and alarm device
CN112613344B (en) Vehicle track occupation detection method, device, computer equipment and readable storage medium
CN111091023A (en) Vehicle detection method and device and electronic equipment
CN104915642A (en) Method and apparatus for measurement of distance to vehicle ahead
Mammeri et al. North-American speed limit sign detection and recognition for smart cars
CN109977941A (en) Licence plate recognition method and device
CN112115800A (en) Vehicle combination recognition system and method based on deep learning target detection
Chiang et al. Road speed sign recognition using edge-voting principle and learning vector quantization network
Karungaru et al. Road traffic signs recognition using genetic algorithms and neural networks
Nguwi et al. Number plate recognition in noisy image
CN111709377B (en) Feature extraction method, target re-identification method and device and electronic equipment
CN104966064A (en) Pedestrian ahead distance measurement method based on visual sense

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant