CN117351439A - Dynamic monitoring management system for intelligent expressway overrun vehicle - Google Patents

Dynamic monitoring management system for intelligent expressway overrun vehicle

Info

Publication number
CN117351439A
Authority
CN
China
Prior art keywords
vehicle
license plate
image
edge
character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311656951.XA
Other languages
Chinese (zh)
Other versions
CN117351439B (en)
Inventor
许洪涛
王慧
杜鹏健
侯胜淋
狄强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Boanits Technology Co ltd
Original Assignee
Shandong Boanits Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Boanits Technology Co ltd filed Critical Shandong Boanits Technology Co ltd
Priority to CN202311656951.XA
Publication of CN117351439A
Application granted
Publication of CN117351439B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/19 Recognition using electronic means
    • G06V30/191 Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19187 Graphical models, e.g. Bayesian networks or Markov models
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems


Abstract

The invention discloses a dynamic monitoring and management system for overrun vehicles on an intelligent expressway, relating to the technical field of intelligent expressways, and comprising: an image acquisition section, a vehicle size detection section, a license plate recognition section, and a monitoring management section. The image acquisition section is used for acquiring images of vehicles travelling on a target expressway; the vehicle size detection section is used for recognizing the vehicle body image so as to judge whether the vehicle corresponding to the vehicle body image is overrun; the license plate recognition section is used for performing license plate recognition from the license plate image when the vehicle size detection section judges that the vehicle is overrun; the monitoring management section is used for acquiring the contact information corresponding to the vehicle according to the license plate number, automatically sending overrun prompt information according to the contact information so as to notify the owner of the overrun, and adding an overrun record for the vehicle. The invention improves the recognition accuracy for overrun vehicles and, through accurate license plate recognition, performs real-time dynamic monitoring of overrun vehicles.

Description

Dynamic monitoring management system for intelligent expressway overrun vehicle
Technical Field
The invention relates to the technical field of intelligent highways, in particular to a dynamic monitoring and management system for an overrun vehicle on an intelligent expressway.
Background
In today's society, highway systems are one of the key components of urban and national traffic networks. As the number of vehicles continues to increase, more and more traffic problems are presented on highways, one of which is the management of overrun vehicles. Overrun vehicles are vehicles whose size or weight exceeds legal limits, which may cause serious damage to roads and bridges, cause traffic jams, and even endanger traffic safety. Therefore, monitoring and management of overrun vehicles is critical.
In conventional overrun vehicle monitoring methods, manual inspection, fixed cameras, and manual identification are often relied upon to detect overrun vehicles. However, this approach suffers from a series of problems:
Low efficiency of manual inspection: manual inspection requires substantial manpower, is costly, and is prone to missed detections.
Limitations of fixed cameras: a fixed camera can only monitor a specific position and cannot cover the whole expressway network in real time, leaving monitoring blind spots.
Instability of manual identification: manual identification suffers from subjectivity and fatigue, is error-prone, and has low identification accuracy.
Poor real-time performance: the traditional methods cannot provide real-time monitoring and management and respond slowly to the appearance of an overrun vehicle.
Disclosure of Invention
The invention aims to provide a dynamic monitoring management system for an overrun vehicle on an intelligent expressway, which improves the recognition accuracy of the overrun vehicle and simultaneously monitors the overrun vehicle dynamically in real time through accurate license plate recognition.
In order to solve the above technical problems, the invention provides a dynamic monitoring and management system for intelligent expressway overrun vehicles, comprising: an image acquisition section, a vehicle size detection section, a license plate recognition section, and a monitoring management section. The image acquisition section is used for acquiring images of vehicles travelling on a target expressway, comprising: license plate images and vehicle body images. The vehicle size detection section is used for recognizing the vehicle body image so as to judge whether the vehicle corresponding to the vehicle body image is overrun, specifically comprising: extracting features of the vehicle body image through an improved Gabor filter to obtain vehicle body image features, detecting the vehicle body image features using a Marr-Hildreth edge detector to obtain edge features, detecting zero crossing points of the edge features to locate edges in the vehicle body image, forming continuous edge line segments by connecting adjacent zero crossing points to obtain the vehicle contour, calculating size parameters of the vehicle according to the extracted vehicle contour, and judging whether the vehicle is overrun. The license plate recognition section is used for performing license plate recognition from the license plate image when the vehicle size detection section judges that the vehicle is overrun, specifically comprising: blurring the license plate image using a Gaussian filter to obtain a blurred image, extracting license plate image features from the blurred image, locating license plate characters from the license plate image features using a Markov random field, learning a dictionary of license plate characters using sparse dictionary learning so as to represent the license plate characters as sparse linear combinations, segmenting the license plate characters using a character segmentation algorithm, recognizing each segmented character using the dictionary obtained by dictionary learning, and splicing the recognized characters into a license plate number in order. The monitoring management section is used for acquiring the contact information corresponding to the vehicle according to the license plate number, automatically sending overrun prompt information according to the contact information so as to notify the owner of the overrun, and adding an overrun record for the vehicle.
Further, the image acquisition section comprises two independently operating image acquisition devices, namely a first image acquisition device and a second image acquisition device. The first image acquisition device and the second image acquisition device respectively acquire, at the same moments and at the same frequency, license plate images and vehicle body images of vehicles travelling on the target expressway, and the license plate image and vehicle body image acquired at the same moment are associated using a unique identifier. When the vehicle size detection section judges from a vehicle body image that the vehicle is overrun, the license plate recognition section obtains the license plate image associated with that vehicle body image via the unique identifier and performs license plate recognition.
Further, the improved Gabor filter is expressed using the following formula:

G(x, y) = exp(-(x'² / (2σx²) + y'² / (2σy²))) · exp(i(2πf x' + φ))

wherein G(x, y) is the response of the improved Gabor filter at the coordinate point (x, y) of the image; x' and y' are respectively the x-axis coordinate and the y-axis coordinate of a pixel in the image after a rotation transformation controlled by the direction parameter θ of the filter; f is the frequency parameter of the filter; σx and σy are the standard deviation parameters of the Gabor filter on the x-axis and on the y-axis, respectively controlling the spatial resolution of the filter in the x-axis direction and in the y-axis direction; φ is the phase offset parameter of the filter. The real part of the response (the cosine term) and the imaginary part (the sine term) are used to detect edge and texture features respectively.
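As a concrete illustration, the real part of the Gabor response described above can be evaluated in pure Python. The parameter values below (orientation π/4, frequency 0.25, unit standard deviations, zero phase) are illustrative choices, not values from the patent:

```python
import math

def gabor_response(x, y, theta, f, sigma_x, sigma_y, phi):
    """Real part of the Gabor filter at image coordinates (x, y).

    theta   : direction parameter (radians)
    f       : frequency parameter
    sigma_x : standard deviation along the rotated x-axis
    sigma_y : standard deviation along the rotated y-axis
    phi     : phase offset parameter
    """
    # Rotate the coordinates by the direction parameter theta.
    x_r = x * math.cos(theta) + y * math.sin(theta)
    y_r = -x * math.sin(theta) + y * math.cos(theta)
    # Gaussian envelope times the cosine carrier (real part of the complex filter).
    envelope = math.exp(-0.5 * (x_r ** 2 / sigma_x ** 2 + y_r ** 2 / sigma_y ** 2))
    return envelope * math.cos(2.0 * math.pi * f * x_r + phi)

# Build a small 5x5 kernel oriented at 45 degrees, centred on (0, 0).
kernel = [[gabor_response(x - 2, y - 2, math.pi / 4, 0.25, 1.0, 1.0, 0.0)
           for x in range(5)] for y in range(5)]
```

A kernel built this way would then be convolved with the body image to produce the feature map used in the later steps.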
Further, let the vehicle body image be I(x, y); the vehicle body image feature F(x, y) obtained by the vehicle size detection section is expressed using the following formula:

F(x, y) = I(x, y) * G(x, y)

wherein * is the convolution operation; the process of the rotation transformation is expressed using the following formulas:

x' = x cos θ + y sin θ
y' = -x sin θ + y cos θ
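The feature extraction F = I * G is an ordinary two-dimensional convolution. A minimal valid-mode sketch in pure Python follows; the boundary handling is an assumption, since the patent does not specify it:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution F = I * G on nested-list images."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    # Flip the kernel in both axes for a true convolution.
                    acc += image[y + j][x + i] * kernel[kh - 1 - j][kw - 1 - i]
            out[y][x] = acc
    return out
```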
further, the vehicle size detecting section detects the vehicle body image feature using a Marr-Hildreteh edge detector to obtainThe process to edge feature includes: calculating a body image feature using the following formulaIs a gradient of (2):
wherein,and->Respectively indicate->At->And->Gradient in direction; the edge strength is calculated using the following formula:
wherein,representing edge strength; and calculating to obtain edge characteristics by using the following formula:
wherein,is an edge feature.
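The gradient and edge-strength computation can be sketched with forward finite differences; this discretization is an illustrative assumption, as the patent does not specify one:

```python
def edge_strength(F):
    """Edge strength M = sqrt(Gx^2 + Gy^2) of a feature map given as nested lists."""
    h, w = len(F), len(F[0])
    M = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Forward differences; zero at the right/bottom border.
            gx = F[y][x + 1] - F[y][x] if x + 1 < w else 0.0
            gy = F[y + 1][x] - F[y][x] if y + 1 < h else 0.0
            M[y][x] = (gx * gx + gy * gy) ** 0.5
    return M
```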
Further, the following formula is used to locate the edges in the vehicle body image by detecting the zero crossing points of the edge feature:

Z(x, y) = +1 if E(x, y) > 0 and (E(x+1, y) < 0 or E(x, y+1) < 0); Z(x, y) = -1 if E(x, y) < 0 and (E(x+1, y) > 0 or E(x, y+1) > 0); Z(x, y) = 0 otherwise

and the following formula is used to form continuous edge line segments by connecting adjacent zero crossing points, thereby obtaining the vehicle contour:

L(x, y) = 1 if Z(x, y) ≠ 0 and (Z(x-1, y) ≠ 0 or Z(x, y-1) ≠ 0); L(x, y) = 0 otherwise

wherein a value L(x, y) = 1 represents the presence of an edge segment and a value of 0 represents its absence; a value of Z(x, y) equal to 1 or -1 indicates that there is a zero crossing at that position, and a value of 0 indicates that there is none; Z(x-1, y) and Z(x, y-1) are the zero-crossing values at the pixel position to the left of and the pixel position above the current pixel position (x, y).
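The zero-crossing detection and segment-linking steps can be sketched as follows; the neighbourhood convention (right/down neighbours for crossings, left/up neighbours for linking) is an assumption consistent with the description:

```python
def zero_crossings(E):
    """Signed zero-crossing map: +1/-1 where the edge feature changes sign, 0 elsewhere."""
    h, w = len(E), len(E[0])
    Z = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbours = []
            if x + 1 < w:
                neighbours.append(E[y][x + 1])
            if y + 1 < h:
                neighbours.append(E[y + 1][x])
            if any(E[y][x] * n < 0 for n in neighbours):
                Z[y][x] = 1 if E[y][x] > 0 else -1
    return Z

def link_segments(Z):
    """Mark L(x, y) = 1 where a zero crossing touches a crossing on its left or above."""
    h, w = len(Z), len(Z[0])
    L = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if Z[y][x] != 0 and ((x > 0 and Z[y][x - 1] != 0) or
                                 (y > 0 and Z[y - 1][x] != 0)):
                L[y][x] = 1
    return L
```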
Further, the license plate recognition section blurs the license plate image using a Gaussian filter, the obtained blurred image being B(x, y). A plurality of different gradient directions θk are set, and the license plate image features are extracted from the blurred image using the following formula:

fk(x, y) = |∇B(x, y)| · δ(|θ(x, y)| - θk)

wherein fk(x, y) is the license plate image feature in the k-th gradient direction; δ(·) is the Dirac delta function; |θ(x, y)| represents the absolute value of the gradient direction; the upper limit of k is equal to the number N of license plate characters. The license plate characters are located from the license plate image features using a Markov random field through the following formula:

P(c) ∝ exp(-Σk Ek(c))

wherein ∝ represents proportionality and c is the position variable of the license plate characters; Ek(c) denotes the energy function of the license plate characters in the gradient direction θk, defined as:

Ek(c) = -‖fk(c)‖F²

wherein fk(c) is the restriction of fk to the candidate character region c, and ‖·‖F² is the squared (second-order) Frobenius norm, so that positions with strong directional features receive high probability.
Further, the dictionary of license plate characters is learned by sparse dictionary learning through the following formula, so that the license plate characters are represented as sparse linear combinations:

min over D, α of ‖X - Dα‖₂² + λ‖α‖₁

wherein D is the dictionary, α is the sparse coefficient matrix, λ is a regularization parameter, and X is the matrix of license plate character samples; the columns αi of α are the sparse components, each license plate character corresponding to one sparse component αi.
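Sparse coding of a character sample against a learned dictionary can be illustrated with a greedy matching-pursuit sketch. This is a stand-in for the l1-regularized solver implied by the formula, not the patent's algorithm:

```python
def matching_pursuit(signal, dictionary, n_atoms=1):
    """Greedily represent `signal` as a sparse combination of dictionary atoms.

    Returns one coefficient per atom; at most n_atoms coefficients are non-zero.
    """
    residual = list(signal)
    coef = [0.0] * len(dictionary)
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        best, best_dot = 0, 0.0
        for i, atom in enumerate(dictionary):
            dot = sum(r * a for r, a in zip(residual, atom))
            if abs(dot) > abs(best_dot):
                best, best_dot = i, dot
        norm_sq = sum(a * a for a in dictionary[best])
        w = best_dot / norm_sq
        coef[best] += w
        # Subtract the atom's contribution from the residual.
        residual = [r - w * a for r, a in zip(residual, dictionary[best])]
    return coef
```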
Further, the license plate characters are segmented using a character segmentation algorithm through the following formula:

si = argmin over x in Wi of Σy B(x, y)

wherein si is the segmentation point of the i-th character and Wi is the search window between the located positions of the i-th and (i+1)-th characters, i.e. the column with the least ink within the window is taken as the segmentation point.
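A projection-based reading of the segmentation step, in which a blank column between two inked runs is taken as a segmentation point, can be sketched as follows. The exact rule is an assumption, since the patent does not detail the algorithm:

```python
def segmentation_points(binary_plate):
    """Column indices where a blank column separates two inked character runs.

    `binary_plate` is a nested list of 0/1 pixels (1 = ink).
    """
    width = len(binary_plate[0])
    # Vertical projection: total ink per column.
    col_ink = [sum(row[x] for row in binary_plate) for x in range(width)]
    points = []
    for x in range(1, width - 1):
        if col_ink[x] == 0 and col_ink[x - 1] > 0 and col_ink[x + 1] > 0:
            points.append(x)
    return points
```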
Further, for each segmented character xi, the character is recognized using the dictionary obtained by dictionary learning through the following formula:

αi* = argmin over α of ‖xi - Dα‖₂² + λ‖α‖₁

wherein αi* is the sparse coefficient used for recognizing the sparse component; the recognition of each character is completed by comparing αi* with a pre-established comparison table of sparse coefficients and characters.
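Matching a recovered sparse coefficient vector against a pre-established coefficient-to-character table can be sketched as a nearest-neighbour lookup; the table entries below are hypothetical placeholders, not values from the patent:

```python
# Hypothetical comparison table mapping reference coefficient vectors to characters.
COEF_TABLE = [([1.0, 0.0], "A"), ([0.0, 1.0], "B")]

def recognize_character(coef, table=COEF_TABLE):
    """Return the character whose reference coefficient vector is closest to `coef`."""
    def dist(u, v):
        # Squared Euclidean distance between coefficient vectors.
        return sum((a - b) ** 2 for a, b in zip(u, v))
    return min(table, key=lambda entry: dist(coef, entry[0]))[1]
```

Recognized characters would then be concatenated in order to form the license plate number.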
The intelligent highway overrun vehicle dynamic monitoring management system has the following beneficial effects: the invention can capture the vehicle image of the target highway, including license plate image and vehicle body image, by using the high-performance camera equipment. The vehicle size detecting section employs an improved Gabor filter and an edge detector, and is capable of recognizing the vehicle contour and the size parameter with high accuracy. The technology has the advantages that the accurate detection capability of the overrun vehicle is improved, the false alarm rate is reduced, the waste of traffic management resources is reduced, and road facilities are protected from being damaged by the overrun vehicle. The license plate image is processed and analyzed, so that license plate characters can be rapidly identified, and a license plate number can be generated. The technology has the beneficial effects that the technology provides the capability of acquiring the vehicle information in real time for the supervision department, including the vehicle owner contact information. This enables the authorities to respond quickly to overrun vehicles, sending alerts and notifications, helping to reduce the risk of traffic accidents caused by overrun vehicles, maintaining road safety. The modified Gabor filter is a key component of the vehicle size detection section. The method controls the filtering of the image through variable parameters, has rotation invariance and can be suitable for vehicles in different directions. The technology has the beneficial effects that the extraction efficiency of the vehicle body image characteristics is improved, the vehicle contour is accurately positioned, and a reliable basis is provided for the subsequent vehicle size detection. Zero-crossing point detection and continuous edge line segment generation techniques facilitate further refinement of vehicle contour extraction. 
By detecting the zero crossing points of the edge features, the system is able to determine the starting and ending positions of the vehicle edge and then connect these points to form a continuous edge line segment. The beneficial effect of this technique is that it improves the integrity and consistency of the vehicle profile, providing more reliable data for vehicle size detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic system structure diagram of a dynamic monitoring and management system for an intelligent highway overrun vehicle according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1: referring to fig. 1, an intelligent highway overrun vehicle dynamic monitoring management system, the system comprising: an image acquisition section, a vehicle size detection section, a license plate recognition section, and a monitoring management section; the image acquisition section for acquiring an image of a vehicle traveling on a target highway, comprising: license plate images and vehicle body images; the vehicle size detection part is used for identifying the vehicle body image so as to judge whether the vehicle corresponding to the vehicle body image exceeds the limit, and specifically comprises the following steps: extracting features of a vehicle body image through an improved Gabor filter to obtain vehicle body image features, detecting the vehicle body image features by using a Marr-Hildrth edge detector to obtain edge features, detecting zero crossing points of the edge features to position edges in the vehicle body image, forming continuous edge line segments by connecting adjacent zero crossing points to obtain a vehicle contour, calculating size parameters of the vehicle according to the extracted vehicle contour, and judging whether the vehicle is out of limit or not; the license plate recognition part is used for carrying out license plate recognition according to the license plate image under the condition that the vehicle size detection part judges that the vehicle exceeds the limit, and specifically comprises the following steps: carrying out fuzzy processing on the license plate image by using a Gaussian filter to obtain a fuzzy image, extracting license plate image features from the fuzzy image, positioning license plate characters by using a Markov random field according to the license plate image features, learning a dictionary of the license plate characters by using sparse dictionary learning, representing the license plate characters as sparse linear combination, dividing the license plate characters by using a character 
dividing algorithm, recognizing the characters by using a dictionary obtained by dictionary learning for each divided character, and splicing the recognized characters into license plate numbers according to the sequence; the monitoring management part is used for acquiring the corresponding contact information of the vehicle according to the license plate number, automatically sending out-of-limit prompting information according to the contact information so as to prompt the vehicle to out-of-limit and adding out-of-limit records of the vehicle.
Specifically, the vehicle size detecting section first receives the vehicle body image from the image acquiring section. The vehicle body image is subjected to feature extraction through an improved Gabor filter. Gabor filters are a tool for analyzing image texture and edge features. It can detect textures and edges in different directions and frequencies. The modified Gabor filter may be adjusted for specific features of the body image to highlight information of the vehicle contour. A feature image is obtained by a Gabor filter, wherein the feature image comprises textures and edge features with different directions and frequencies in a car body image. This feature image is more advantageous for subsequent processing steps, as it highlights information about the vehicle contour. The feature image is then processed through a Marr-Hildreth edge detector to further enhance the edge features in the image. The Marr-Hildreth edge detector is an edge detection method commonly used in image processing to find edges by detecting changes in the gray values of pixels in an image. After passing the Marr-Hildreth edge detector, an edge feature image is obtained, which contains information about the edge. Next, the system will detect zero crossings in this image, which marks the transition of the edge from background to foreground or vice versa. By connecting adjacent zero crossing points, the system can form continuous edge line segments that constitute an approximate representation of the vehicle contour. The connection of these edge line segments may help determine the general shape of the vehicle.
The main function of the vehicle size detection section is to detect whether the vehicle is overrun. By analyzing the vehicle body image, extracting the vehicle contour information and calculating the size parameters, the system can accurately judge whether the length, width and height of the vehicle are within legal ranges. If the size of the vehicle exceeds the specified limit, the system will identify the vehicle as an overrun vehicle. Detecting overrun vehicles is critical to traffic safety. Overrun vehicles may cause accidents on expressways, damage bridges, tunnels or other roadbed facilities, and endanger the life and property safety of other road users. By detecting and identifying overrun vehicles in time, necessary measures such as stopping the vehicle or guiding it around can be taken to ensure traffic safety. There are typically specified vehicle size restrictions on expressways to ensure the integrity and safety of the infrastructure. The vehicle size detection section helps to verify whether the vehicle complies with these regulations, to reduce damage to the road infrastructure, and to ensure that road usage complies with the regulations.
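The legality check itself reduces to comparing measured dimensions against statutory limits. The limit values below are illustrative assumptions (loosely modelled on common Chinese road-vehicle limits), not figures from the patent:

```python
# Illustrative statutory limits in metres (assumed values, not from the patent).
LIMITS = {"length_m": 18.1, "width_m": 2.55, "height_m": 4.0}

def is_overrun(length_m, width_m, height_m, limits=LIMITS):
    """Flag a vehicle whose measured dimensions exceed any legal limit."""
    return (length_m > limits["length_m"]
            or width_m > limits["width_m"]
            or height_m > limits["height_m"])
```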
The license plate recognition section first receives the license plate image from the image acquisition section. In order to reduce noise in the image and highlight features of license plate characters, license plate images are typically blurred by a gaussian filter. This helps smooth the image and improves the stability of subsequent processing steps. And extracting the characteristics of the license plate image from the blurred image. These features may include color, texture, shape, and edge information. Different license plate recognition systems may employ different feature extraction methods. By using a Markov random field or other object localization method, the system is able to localize the approximate location of license plate characters in a license plate image. Markov random fields are a statistical tool for modeling the position of objects in images that can help locate characters based on a priori knowledge. To recognize characters, the system learns the dictionary of characters using a sparse dictionary learning method. Dictionary learning is a machine learning technique that can learn the representation of characters from a large amount of sample data. By learning the dictionary of characters, the system is better able to understand the character's characteristics. Based on the character positioning, the system can perform character segmentation to divide the characters in the license plate image into separate parts. This is to further process each character.
When the vehicle size detecting section judges that the vehicle is out of limit, the license plate recognizing section is configured to acquire a license plate number of the vehicle. This license plate number may be used to uniquely identify and track an overrun vehicle. By identifying the license plate number, the system can establish the record of the vehicle information and ensure that the illegal behavior of the overrun vehicle is recorded and processed. The license plate recognition portion may associate the size information of the vehicle with the license plate number. This helps to correlate the size information of the vehicle with the license plate number in the subsequent monitoring and management process so that the manager can more easily understand the detailed information of the overrun vehicle. Once the license plate number of the overrun vehicle is identified, the system may automatically send an overrun alert message to the vehicle owner or related authorities. This helps to timely inform the vehicle of an overrun condition, taking necessary actions such as requesting the vehicle to stop or change the course of travel to ensure traffic safety and regulatory compliance.
Example 2: the image acquisition part comprises two image acquisition devices which are independently operated, namely a first image acquisition device and a second image acquisition device; the first image acquisition device and the second image device acquire license plate images and vehicle body images of vehicles running on the target expressway at the same time according to the same frequency respectively, and the license plate images and the vehicle body images acquired at the same time are associated by using unique identifiers; in the case where the vehicle size detecting section judges that the vehicle is out of limit based on the vehicle body image, the license plate recognizing section obtains a license plate image associated with the vehicle body image based on the unique identifier, and performs license plate recognition.
In particular, the two image capturing devices may be cameras or other image capturing devices mounted on the highway for capturing images of the traveling vehicle. They operate independently but at the same frequency. The first image acquisition device and the second image acquisition device simultaneously capture a license plate image and a body image of the vehicle at each moment. This means that at the same point in time, the system will acquire two images, one being the license plate image and the other being the body image. To ensure correspondence between license plate images and body images, the system assigns them a unique identifier. In this way, the license plate image and the body image captured at each moment can be associated by the identifier to ensure that they belong to the same vehicle.
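The pairing of plate and body images by a unique identifier can be sketched as a small in-memory store; the class and method names are hypothetical:

```python
import uuid

class FrameStore:
    """Associate plate and body images captured at the same instant via a shared identifier."""

    def __init__(self):
        self.plates = {}
        self.bodies = {}

    def capture(self, plate_image, body_image):
        # One unique identifier per capture instant links the two images.
        frame_id = uuid.uuid4().hex
        self.plates[frame_id] = plate_image
        self.bodies[frame_id] = body_image
        return frame_id

    def plate_for(self, frame_id):
        # Look up the plate image associated with an overrun body image.
        return self.plates[frame_id]
```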
Example 3: the improved Gabor filter is expressed using the following formula:
the method comprises the steps of carrying out a first treatment on the surface of the Wherein,is an improved Gabor filter at coordinate point of the image +.>Response at; />And->Respectively pixels in the image +.>Axis coordinate sum->The axis coordinates, but subjected to a rotation transformation consisting of the direction parameters of the filter +.>Controlling; />Is the frequency parameter of the filter; />And->Gabor filters are in +. >Standard deviation parameter of axis and Gabor filter in +.>Standard deviation parameters of the axes, respectively controlling the filter in +.>Axis direction and +.>Spatial resolution in the axial direction; />Is the phase offset parameter of the filter.
Specifically, the coordinate point (x, y) is the position at which the filter is applied to the image. When filtering the image, the filter performs an operation at each coordinate point (x, y) to capture the local features of the image.
Rotation transformation and direction parameter: the coordinates (x, y) are transformed by the rotation transformation to obtain x' and y'. The angle of this rotation transformation is controlled by the direction parameter θ of the filter. This is to accommodate features in different directions: for example, when θ = 0 the filter is sensitive to features in the horizontal direction, and when θ = π/2 it is sensitive to features in the vertical direction.
Frequency parameter f: this parameter controls the periodicity of the filter. Different frequency values affect the filter's perception of texture or edge features: higher frequency values are suitable for detecting fine textures, while lower frequency values are suitable for detecting large-scale structures.
Standard deviation parameters σx and σy: these parameters control the spatial resolution of the filter in the x-axis and y-axis directions. A smaller standard deviation results in a higher resolution, and the filter can detect details in the image more sensitively. A larger standard deviation results in a lower resolution, and the filter is more suitable for detecting large-scale structures.
Phase offset parameter: this parameter controls the phase of the filter response. The phase information is important in the filtering process as it can be used to represent different phases or directions of the feature. By adjusting->The filter may be more sensitive to the characteristics of a particular phase.
Cosine and sine functions: these functions are used to calculate the real and imaginary response of the filter, respectively. The cosine function is used to detect edge features in the image, while the sine function is used to detect texture features in the image. A combination of real and imaginary responses may be used to capture different types of features.
At each image coordinate point (x, y), the filter's response value is computed from the given parameters. This response value can be used to enhance or suppress particular texture or edge features in the image. By adjusting the parameters, the filter can perceive features of different directions, frequencies, and spatial resolutions, enabling the extraction and analysis of different types of features in the image. Gabor filters are applied to images mainly for texture analysis: thanks to their multi-parameter nature, the frequency parameter λ and the standard deviation parameters σ_x and σ_y can be tuned to accommodate textures of different sizes and proportions, capturing details and texture features such as the direction, density, and thickness of a texture. Gabor filters can also extract important image features such as edges and edge directions; by adjusting the direction parameter θ and the phase offset parameter ψ, the filter can perceive features of different directions and phases in the image, which is very useful for tasks such as object recognition, object detection, and image classification. Because the real and imaginary responses are computed from cosine and sine functions respectively, the real part of the Gabor filter is highly sensitive to edge characteristics and can be used for edge detection; edges are often important features of object boundaries in an image. Finally, adjusting the standard deviation parameters σ_x and σ_y controls the filter's resolution in the spatial domain: a smaller standard deviation yields a higher resolution suitable for detecting fine details, while a larger standard deviation yields a lower resolution suitable for detecting large-scale structures.
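As a concrete illustration, the kernel described above can be sampled on a discrete grid. The sketch below assumes the standard complex Gabor form with the parameter names θ, λ, σ_x, σ_y, ψ used in this description; the patent's "improved" variant may add further terms:

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma_x, sigma_y, psi):
    # Sample the complex Gabor filter on a size x size grid.
    # Returns the (real, imaginary) responses.  The rotation below is
    # x' = x cos(theta) + y sin(theta), y' = -x sin(theta) + y cos(theta),
    # controlled by the direction parameter theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope: spatial resolution set by sigma_x, sigma_y
    env = np.exp(-(x_r**2 / (2 * sigma_x**2) + y_r**2 / (2 * sigma_y**2)))
    phase = 2 * np.pi * x_r / lam + psi   # frequency lam, phase offset psi
    return env * np.cos(phase), env * np.sin(phase)

real, imag = gabor_kernel(21, theta=0.0, lam=8.0, sigma_x=4.0, sigma_y=4.0, psi=0.0)
```

With θ = 0 this kernel oscillates along the x axis, so its real part responds to vertical edges, matching the directional-sensitivity discussion above.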
Example 4: let the car body image beThe vehicle body image feature obtained by the vehicle size detecting section +.>The expression is used as follows:
wherein,is convolution operation; the process of rotation transformation is expressed using the following formula:
Specifically, the convolution applies the Gabor filter to the vehicle body image: at each pixel position the filter's response is computed, generating the new image feature F(x, y). The convolution operation is effectively a dot product between the filter and the local image patch, with the result slid across the image to obtain the complete feature map. The response of a Gabor filter is determined by its parameters: the frequency parameter λ controls its frequency selectivity, the standard deviation parameters σ_x and σ_y control its spatial resolution, the direction parameter θ controls its direction selectivity, and the phase offset parameter ψ controls its phase. The combination of these parameters determines the filter's response to the vehicle body image; by responding to the image at different directions, frequencies, and scales, the filters capture texture and feature information. The rotation transformation is realized through the coordinate transformation formulas, which rotate the coordinates (x, y) to the new coordinates (x′, y′). This transformation is controlled by the direction parameter θ and can therefore adapt to features of different directions; through the rotation, the filter can perceive textures and edges at different angles in the image.
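The step F = I ∗ G can be sketched directly in NumPy. The tiny hand-written "valid" convolution below, and the 3×3 stand-in kernel, are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def convolve2d_valid(image, kernel):
    # True 2-D convolution ('valid' region): flip the kernel, then take
    # the dot product with every local window of the image.
    k = kernel[::-1, ::-1]
    kh, kw = k.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# toy input: a vertical step edge; the 3x3 kernel stands in for the
# real part of a Gabor filter tuned to horizontal intensity changes
image = np.zeros((5, 5))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feature = convolve2d_valid(image, kernel)
```

The feature map responds only where the window straddles the step edge, which is exactly the local-feature behaviour the text describes.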
Example 5: the vehicle size detecting section detects the vehicle body image feature using a Marr-Hildreth edge detector, and the process of obtaining the edge feature includes: calculating a body image feature using the following formulaIs a gradient of (2):
wherein,and->Respectively indicate->At->And->Gradient in direction; the edge strength is calculated using the following formula:
wherein,representing edge strength; and calculating to obtain edge characteristics by using the following formula:
Wherein,is an edge feature.
Specifically, the vehicle body image feature F(x, y) is the feature previously produced by the Gabor filter, where (x, y) are pixel coordinates in the image; it contains the texture and edge information of the image.
Gradient calculation:
G_x(x, y): the gradient of the body image feature F(x, y) in the horizontal (x) direction. A gradient is the rate of change of pixel values in an image and is typically used to detect edges. G_x is the partial derivative ∂F/∂x, i.e. the rate of change along the x axis; it equals the gradient magnitude multiplied by the cosine of the gradient direction.
G_y(x, y): the gradient of F(x, y) in the vertical (y) direction. Like G_x, it describes the rate of change of pixel values, but in the y direction: G_y is the partial derivative ∂F/∂y, i.e. the rate of change along the y axis, and it equals the gradient magnitude multiplied by the sine of the gradient direction.
The two gradient components G_x and G_y together determine the edge direction and edge intensity in the image.
Edge strength E(x, y): the gradient magnitude computed from G_x and G_y, representing the edge intensity in the image. It is calculated with the Euclidean distance formula E = √(G_x² + G_y²), i.e. the square root of the sum of the squares of the two gradient components.
Edge feature L(x, y): this value is the response of the edge detector, commonly referred to as the Laplacian (second-order gradient). It is derived by computing the second derivative of the edge strength map E(x, y) and represents the rate of change of edges in the image. Where pixel values change at an edge, the edge feature L exhibits extreme points, which correspond to the positions of the edges.
In Example 5, gradient computation and edge detection are mainly used to analyze the edge information in the vehicle body image features. First, the gradient components G_x and G_y of F(x, y) determine the direction and intensity of the edges. Then, edge locations in the image are detected by computing the second derivative L(x, y) of the edge strength E(x, y). This process helps the vehicle size detecting section recognize the outline and size characteristics of the vehicle and judge whether it is overrun. The Marr-Hildreth edge detector is commonly used to find edges and features in images.
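A minimal sketch of the gradient → edge-strength → Laplacian pipeline of Example 5, using central differences via `np.gradient` (the patent does not specify the discretization, so this choice is an assumption):

```python
import numpy as np

def marr_hildreth_features(F):
    # G_x, G_y: gradients of the body image feature (central differences)
    Gy, Gx = np.gradient(F)
    # E: edge strength, the Euclidean norm of the gradient
    E = np.sqrt(Gx**2 + Gy**2)
    # L: Laplacian of the edge strength (second-order derivative)
    Ey, Ex = np.gradient(E)
    L = np.gradient(Ey, axis=0) + np.gradient(Ex, axis=1)
    return E, L

F = np.zeros((8, 8))
F[:, 4:] = 1.0            # vertical step edge in the feature map
E, L = marr_hildreth_features(F)
```

On the step edge, E peaks on the two columns flanking the jump, and L changes sign across the edge, which is what the zero-crossing step of Example 6 exploits.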
Example 6: The edges in the vehicle body image are located by detecting the zero crossing points of the edge feature using the following formula:
Z(x, y) = 1 if L(x, y) > 0 and (L(x−1, y) < 0 or L(x, y−1) < 0);
Z(x, y) = −1 if L(x, y) < 0 and (L(x−1, y) > 0 or L(x, y−1) > 0);
Z(x, y) = 0 otherwise.
The following formula is then used to form continuous edge line segments by connecting adjacent zero crossing points, thereby obtaining the vehicle contour:
C(x, y) = 1 if Z(x, y) ≠ 0 and (C(x−1, y) = 1 or C(x, y−1) = 1); otherwise C(x, y) = 0.
Here C(x, y) = 1 represents the presence of an edge segment and C(x, y) = 0 its absence; Z(x, y) = 1 or −1 indicates that there is a zero crossing at that location, and Z(x, y) = 0 indicates no zero crossing; C(x−1, y) and C(x, y−1) denote the values at the previous pixel position (x−1, y) and the pixel position to the left (x, y−1) of the current pixel position (x, y).
Specifically, an edge feature map of the image must first be computed, usually by performing edge detection or gradient operations on the original image. The edge feature map contains the edge intensity of every pixel in the image and represents brightness or color changes. In the edge feature map, each pixel has an edge intensity value representing the luminance or color gradient at that point; the gradient is the rate and direction of that change. Zero-crossing detection is concerned with sign changes of the gradient value: a zero crossing is a pixel position on an edge where the edge feature value changes from positive to negative, or from negative to positive, near that position. Such a change indicates a turning point or transition region on the edge.
Specifically, zero-crossing detection proceeds as follows. For each pixel position (x, y), the gradient values in its neighborhood are checked, usually at the adjacent pixel positions such as (x−1, y) and (x, y−1). If the edge feature value L(x, y) is greater than zero (the gradient direction is positive) and a neighboring feature value L(x−1, y) or L(x, y−1) is less than zero (the gradient direction has changed from positive to negative), Z(x, y) is set to 1, indicating a positive-to-negative edge feature change at the current location. If L(x, y) is less than zero (the gradient direction is negative) and a neighboring value L(x−1, y) or L(x, y−1) is greater than zero (the gradient direction has changed from negative to positive), Z(x, y) is set to −1, indicating a negative-to-positive edge feature change. If neither condition is satisfied, i.e. the sign of the gradient is unchanged, Z(x, y) is set to 0, indicating no edge feature change at the current location.
Zero-crossing detection review: the zero crossings identified in the image generally correspond to the starting or ending positions of edges, as well as edge turning points.
Computing the continuous edge line segments means connecting the zero crossings to form continuous segments. Specifically, for each pixel position (x, y) the following is checked: if Z(x, y) ≠ 0, the current position is a zero crossing and possibly the starting point of an edge.
It is also checked whether the previous pixel position (x−1, y) or the pixel position to the left (x, y−1) has already been marked as part of an edge line segment, i.e. C(x−1, y) = 1 or C(x, y−1) = 1. If both conditions hold, that is, the current position is a zero crossing and at least one of its previous or left pixel positions is already part of an edge line segment, C(x, y) is set to 1, indicating an edge line segment at the current location. The key to this process is maintaining edge continuity: by checking whether the previous and left pixel positions are already part of an edge line segment, the continuity of the segment is ensured. C(x, y) is set to 1 only when the current position is a zero crossing and a neighboring pixel position already belongs to the same edge.
Computing the continuous edge line segments ensures edge continuity by connecting adjacent zero crossings, forming the continuous line segments of the vehicle contour. These continuous segments describe the contour shape of the vehicle and provide the basis for subsequent vehicle size detection. The principle of this process is to acquire vehicle contour information by maintaining edge continuity, in order to judge whether the vehicle is overrun.
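The two stages above, marking zero crossings and then linking neighbours, can be sketched as follows. Because the recursive membership test in the text needs a seed, this sketch links a crossing to any neighbouring crossing instead; that simplification is an assumption of the sketch:

```python
import numpy as np

def zero_crossings(L):
    # Z = 1 where the edge feature goes + -> - toward the right
    # neighbour, -1 where it goes - -> +, 0 elsewhere.
    Z = np.zeros(L.shape, dtype=int)
    Z[:, :-1][(L[:, :-1] > 0) & (L[:, 1:] < 0)] = 1
    Z[:, :-1][(L[:, :-1] < 0) & (L[:, 1:] > 0)] = -1
    return Z

def link_segments(Z):
    # Connect adjacent crossings: a pixel joins a segment when it is a
    # crossing AND the pixel above or to its left is also a crossing.
    C = np.zeros(Z.shape, dtype=int)
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            if Z[i, j] != 0:
                above = i > 0 and Z[i - 1, j] != 0
                left = j > 0 and Z[i, j - 1] != 0
                if above or left:
                    C[i, j] = 1
    return C

L = np.zeros((4, 4))
L[:, 1], L[:, 2] = 1.0, -1.0   # sign change between columns 1 and 2
Z = zero_crossings(L)
C = link_segments(Z)
```

The column of crossings in Z is turned into a vertical segment in C, illustrating how adjacent zero crossings chain into a continuous contour line.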
Example 7: The license plate recognition part blurs the license plate image with a Gaussian filter, obtaining the blurred image B(x, y). A plurality of different gradient directions θ_j is set, and license plate image features are extracted from the blurred image using the following formula:
F_j(x, y) = |∇B(x, y)| · δ(∠∇B(x, y) − θ_j)
where F_j(x, y) is the license plate image feature in the j-th gradient direction and δ is a Dirac delta function; |j| denotes the absolute value of the gradient-direction index, whose upper limit equals the number of license plate characters N. License plate characters are then located from the license plate image features using a Markov random field via the following formula:
P(L | F) ∝ exp(−Σ_j E_j(L))
where ∝ denotes the proportionality operation and L is the position variable of the license plate characters; E_j(L) denotes the energy function of the license plate characters under gradient direction θ_j, defined as:
E_j(L) = ‖F_j − L‖_F²
where ‖·‖_F² is the squared (second-order) Frobenius norm.
Specifically, Gaussian filtering is a commonly used image-processing technique for smoothing images and reducing noise. Here it is applied to license plate images to reduce image noise due to illumination variations, sensor noise, or other factors. Gaussian filtering is realized by a weighted average of the pixels around each pixel; the weights are determined by a Gaussian distribution function, with pixels farther from the center pixel receiving smaller weights. This smooths the image, helping to remove noise and reduce detail. After the blurring, for a plurality of different gradient directions θ_j, the license plate image features are extracted with the formula above. F_j(x, y) denotes the license plate image feature under gradient direction θ_j, where θ_j may take different angles, typically a series of angle values covering different directional information. δ is a Dirac delta function used to detect features in the image that match a particular gradient direction: for each pixel location it computes a feature value that indicates whether an edge or texture feature of that gradient direction is present. If a pixel in the image changes strongly along gradient direction θ_j, the corresponding value of F_j(x, y) will be relatively high, indicating important feature information in that direction, which aids subsequent character positioning and recognition. In general, the blurring improves license plate image quality through noise reduction and smoothing, and the feature extraction uses the feature information in different gradient directions to search for potential character edges or textures, providing more informative data for subsequent character positioning. These features are very important for subsequent license plate character recognition because they help to accurately locate and recognize characters.
A Markov random field is a probabilistic graphical model used to model relationships between random variables. In the context of license plate recognition, the random variable L represents the character position variable, i.e. the position of the license plate characters in the image. The goal of the modeling is to determine the probability distribution of the character position, P(L | F): the posterior probability of position L given the license plate image features F. This posterior distribution describes how likely a character is to appear at different locations in the image.
In the Markov random field, the probability distribution of character positions is tied to an energy function composed of the terms E_j, where j ranges over the different gradient directions. Each gradient direction θ_j corresponds to an energy term E_j(L), defined as the squared Frobenius norm ‖F_j − L‖_F², which measures the cumulative error of the character position L against the features in that direction; each gradient direction θ_j carries its own weight. Minimizing the energy function corresponds to finding the character position L that minimizes the errors across the different gradient directions. In other words, the position of the characters should match the feature information (gradient directions) in the image so that the overall energy is minimized.
According to the Bayesian principle, the posterior probability distribution of the character position, P(L | F), is proportional to the exponential of the negative energy, i.e. P(L | F) ∝ exp(−Σ_j E_j(L)). This means that, given the feature information F, the probability of a character position L is exponentially related to its corresponding energy value. By minimizing the energy function, the most probable character position can be found, i.e. the position that maximizes the posterior distribution P(L | F). Character positioning thus becomes the problem of finding the character position with minimal energy. In summary, this step uses the probability distribution of character positions modeled by the Markov random field and determines the positions by minimizing the energy function; the energy function accounts for the errors across the different gradient directions, which the character position should minimize. This process helps to locate license plate characters accurately, providing key information for subsequent character recognition.
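The minimize-the-energy reading of P(L | F) ∝ exp(−Σ_j E_j(L)) can be sketched as an exhaustive search over candidate positions. The character template and its placement are hypothetical additions of this sketch (the patent does not define them):

```python
import numpy as np

def locate_character(features, template, positions):
    # For each candidate position L, place the (hypothetical) character
    # template there and sum the per-direction energies
    # E_j(L) = ||F_j - T_L||_F^2; the best L minimises the total energy,
    # i.e. maximises P(L|F) proportional to exp(-sum_j E_j(L)).
    h, w = features[0].shape
    th, tw = template.shape
    best_L, best_E = None, np.inf
    for L in positions:
        T = np.zeros((h, w))
        T[:th, L:L + tw] = template
        E = sum(np.linalg.norm(Fj - T, 'fro') ** 2 for Fj in features)
        if E < best_E:
            best_L, best_E = L, E
    return best_L, best_E

# toy: one directional feature map with a 2x2 character blob at column 3
F1 = np.zeros((4, 6))
F1[:2, 3:5] = 1.0
best_L, best_E = locate_character([F1], np.ones((2, 2)), range(5))
```

The search returns the column where the template matches the feature map exactly, i.e. where the energy vanishes and the posterior peaks.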
Example 8: A dictionary of license plate characters is learned using sparse dictionary learning via the following formula, and each license plate character is represented as a sparse linear combination:
min_{D, α} ‖X − Dα‖² + λ‖α‖₁
with each character represented as:
x_i = D·α_i
where D is the dictionary, α is the sparse coefficient vector, λ is a regularization parameter, and α_i is a sparse component; each license plate character corresponds to one sparse component α_i.
Specifically, sparse dictionary learning is a machine-learning technique for learning a representation of data, particularly when the data can be represented by sparse linear combinations. Here the data are images of license plate characters. The goal of learning is to build a dictionary D containing a set of basis vectors representing different variants of license plate character images; each basis vector represents a particular character or a variant of a character. Each license plate character image can then be expressed as a sparse linear combination of the basis vectors in D. This linear combination is controlled by a coefficient vector α, which is sparse: most of its elements are zero and only a few are non-zero. The expression is x = Dα, where x is a license plate character image, D is the dictionary, and α is the coefficient vector; by adjusting α, an optimal linear combination can be found that best reconstructs the original character image. α_i denotes the sparse component of each character, with one α_i per character. Learning is a minimization problem over the dictionary D and the coefficient vector α, with regularization parameter λ: the goal is to minimize the reconstruction error ‖X − Dα‖² while adding the λ-weighted ℓ₁ term to encourage sparsity, which can be seen as forcing α to become sparse. By applying this to each license plate character image, the sparse linear combination of each character is obtained, denoted x_i = Dα_i. The purpose of this procedure is to convert license plate character images into linear combinations of the basis vectors in the dictionary D for subsequent character recognition; the sparse representation encodes character features compactly and is efficient for character recognition tasks.
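The sparse-coding half of this objective, min_α ‖x − Dα‖² + λ‖α‖₁ for a fixed dictionary, can be sketched with ISTA (iterative soft-thresholding). The patent names no solver, so ISTA is a stand-in assumption:

```python
import numpy as np

def ista_sparse_code(x, D, lam, n_iter=200):
    # ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1: a gradient step
    # on the data term followed by soft-thresholding for the l1 penalty.
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - step * (D.T @ (D @ a - x))
        a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)
    return a

# toy dictionary with two orthonormal atoms; x lies on the first atom,
# so the learned code should be sparse: only a[0] non-zero
D = np.eye(2)
x = np.array([1.0, 0.0])
a = ista_sparse_code(x, D, lam=0.1)
```

With an orthonormal dictionary the fixed point is the soft-thresholded projection, so the second coefficient is exactly zero: the "forcing α to become sparse" effect described above.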
Example 9: The license plate characters are segmented using a character segmentation algorithm via the following formula:
s_i = argmax_x [ P(x | F) · α_i(x)^λ ]
where s_i is the segmentation point of the i-th character.
Specifically, the probability distribution of character positions reflects, given the features of the license plate character image, especially the sparse representation α_i of the characters and the gradient directions θ_j, how likely each position is to be a character boundary. P(x | F) denotes the probability that position x is a character boundary given the extracted features; this probability distribution is based on the features from the previous steps and tells which positions the boundaries of a character are most likely to occupy.
Sparse representation of the characters: α_i is the sparse representation of the i-th character, learned in the previous steps. It indicates how strongly each position in the character image corresponds to the i-th character. The sparse representation contains the feature information of the license plate characters and can guide the determination of the segmentation points; in the formula, the term α_i(x) is computed from this sparse representation.
Regularization parameter: the regularization parameter λ balances the importance of the characters' sparse representation against the probability distribution information, adjusting the preference of character segmentation so that the segmentation points are more accurate. A larger λ emphasizes the sparse representation of the characters and hence the character features; a smaller λ relies more on the probability distribution information.
Maximization: the maximization in the formula finds the position that maximizes the product between the character position probability distribution and the character's sparse representation; that position becomes the character's segmentation point. Specifically, for the i-th character the sparse representation α_i is applied and the maximum of the product of the character position probability distribution and the sparse representation is computed.
Role of the segmentation points: the segmentation point s_i lies at the boundary of a character and divides the character image into two parts for subsequent character recognition. Each character can then be recognized independently, improving recognition accuracy. By jointly considering the character position probability distribution and the character sparse representation, the formula determines the segmentation points more accurately, increasing the accuracy and robustness of character segmentation.
In summary, the principle of this formula is to determine a character's segmentation point by maximizing the product between the character position probability distribution and the character's sparse representation. The formula combines the characters' features and position information to improve segmentation accuracy and provide precise input for subsequent character recognition. This is one of the key steps of the character segmentation algorithm.
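The maximize-the-product rule can be sketched in a few lines. Treating λ as an exponent that weights the sparse representation is an assumption of this sketch (the patent formula's exact weighting is not recoverable from the text):

```python
import numpy as np

def segmentation_points(pos_prob, sparse_reps, lam=1.0):
    # For each character i, choose s_i = argmax_x P(x|F) * alpha_i(x)^lam,
    # i.e. the position where the boundary probability and the
    # character's sparse representation jointly peak.
    points = []
    for alpha in sparse_reps:
        score = pos_prob * np.abs(alpha) ** lam
        points.append(int(np.argmax(score)))
    return points

# toy: position probability peaked near x=2 and x=5; two characters
# whose sparse representations peak at those columns
p = np.array([0.0, 0.1, 0.8, 0.1, 0.2, 0.9, 0.1])
a1 = np.array([0.0, 0.2, 1.0, 0.1, 0.0, 0.0, 0.0])
a2 = np.array([0.0, 0.0, 0.0, 0.1, 0.2, 1.0, 0.1])
splits = segmentation_points(p, [a1, a2])
```

Each character's split lands where both factors agree, which is the "product between the probability distribution and the sparse representation" criterion stated above.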
Example 10: For each segmented character, the character is identified using the dictionary obtained from dictionary learning via the following formula:
β_i = argmin_β ( ‖α_i − Dβ‖² + λ‖β‖₁ )
where β_i is the identification sparse coefficient of the sparse component α_i. Recognition of each character is completed by comparing β_i against a pre-established look-up table of sparse coefficients and characters.
Specifically, after the character segmentation step the license plate is divided into individual character units, each representing a candidate character. The goal of character recognition is to determine which character each unit contains. In the formula, β_i represents the identification sparse coefficients of the sparse component α_i. β_i is computed by minimizing the objective function ‖α_i − Dβ‖² + λ‖β‖₁, where D is the previously learned dictionary, β is the sparse coefficient vector, and λ is a regularization parameter. The first term of the objective reconstructs the sparse component α_i through a linear combination of the bases in the dictionary D, thereby finding the sparse coefficients β that best represent α_i; the second term is the regularization term controlling sparsity, encouraging the result β_i to have few non-zero coefficients.
Once the identification sparse coefficients β_i of the sparse component α_i are obtained, they are compared with a pre-established look-up table of sparse coefficients and characters. The table stores the sparse coefficients of each character, obtained through training and learning. By comparing β_i with the sparse coefficients in the table, the character represented by the segmented unit can be determined. The role of this formula is to match each segmented character against predefined character models: by learning the dictionary and establishing the coefficient-character look-up table, efficient character recognition is achieved. The formula computes the identification sparse coefficients β_i of a character and compares them with the character model table to recognize each segmented character. This is one of the key steps of character recognition, helping to accurately recognize every character on the license plate.
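The look-up-table comparison can be sketched as a nearest-neighbour match over stored coefficient vectors. The text only says "comparison", so Euclidean distance (and the toy table entries) are assumptions of this sketch:

```python
import numpy as np

def recognize_character(beta, table):
    # Compare the identification sparse coefficients beta with each
    # entry of the pre-built coefficient -> character table and return
    # the character whose stored coefficients are closest.
    best_char, best_d = None, np.inf
    for char, ref in table.items():
        d = np.linalg.norm(beta - ref)
        if d < best_d:
            best_char, best_d = char, d
    return best_char

# hypothetical table: per-character sparse coefficients from training
table = {
    "A": np.array([1.0, 0.0, 0.0]),
    "8": np.array([0.0, 1.0, 0.0]),
    "B": np.array([0.0, 0.9, 0.4]),
}
result = recognize_character(np.array([0.05, 0.95, 0.35]), table)
```

Note that the table deliberately contains the easily confused pair "8"/"B"; the extra coefficient dimension is what separates them, illustrating why the sparse code, rather than raw pixels, is compared.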
The present invention has been described in detail above. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (10)

1. An intelligent expressway overrun vehicle dynamic monitoring management system, characterized in that the system comprises: an image acquisition section, a vehicle size detection section, a license plate recognition section, and a monitoring management section; the image acquisition section is used for acquiring images of a vehicle traveling on a target expressway, comprising: license plate images and vehicle body images; the vehicle size detection section is used for recognizing the vehicle body image to judge whether the vehicle corresponding to the vehicle body image is overrun, specifically comprising: extracting features of the vehicle body image through an improved Gabor filter to obtain vehicle body image features, detecting the vehicle body image features using a Marr-Hildreth edge detector to obtain edge features, detecting zero crossing points of the edge features to locate edges in the vehicle body image, forming continuous edge line segments by connecting adjacent zero crossing points to obtain the vehicle contour, calculating size parameters of the vehicle according to the extracted vehicle contour, and judging whether the vehicle is overrun; the license plate recognition section is used for performing license plate recognition from the license plate image in the case where the vehicle size detection section judges that the vehicle is overrun, specifically comprising: blurring the license plate image with a Gaussian filter to obtain a blurred image, extracting license plate image features from the blurred image, locating license plate characters from the license plate image features using a Markov random field, learning a dictionary of license plate characters using sparse dictionary learning, representing the license plate characters as sparse linear combinations, segmenting the license plate characters using a character segmentation algorithm, recognizing each segmented character using the dictionary obtained by dictionary learning, and splicing the recognized characters into a license plate number in order; the monitoring management section is used for acquiring the contact information corresponding to the vehicle according to the license plate number, automatically sending overrun prompt information according to the contact information to notify the vehicle of the overrun, and adding an overrun record for the vehicle.
2. The intelligent expressway overrun vehicle dynamic monitoring management system according to claim 1, wherein the image acquisition section comprises two independently operating image acquisition devices: a first image acquisition device and a second image acquisition device; the first image acquisition device and the second image acquisition device respectively acquire, at the same frequency and at the same moment, license plate images and vehicle body images of vehicles traveling on the target expressway, and the license plate image and the vehicle body image acquired at the same moment are associated by a unique identifier; in the case where the vehicle size detection section judges from the vehicle body image that the vehicle is overrun, the license plate recognition section obtains the license plate image associated with that vehicle body image via the unique identifier and performs license plate recognition.
3. The intelligent expressway overrun vehicle dynamic monitoring management system according to claim 2, wherein the improved Gabor filter is expressed using the following formula:
G(x, y) = exp(−(x′²/(2σ_x²) + y′²/(2σ_y²))) · [cos(2πx′/λ + ψ) + i·sin(2πx′/λ + ψ)]
wherein G(x, y) is the response of the improved Gabor filter at image coordinate point (x, y); x′ and y′ are the pixel's x-axis and y-axis coordinates after a rotation transformation controlled by the filter's direction parameter θ; λ is the frequency parameter of the filter; σ_x and σ_y are the standard deviation parameters of the Gabor filter along the x axis and the y axis, controlling the filter's spatial resolution in the x-axis and y-axis directions respectively; ψ is the phase offset parameter of the filter.
4. The intelligent expressway overrun vehicle dynamic monitoring management system according to claim 2, wherein the vehicle body image is set as I(x, y), and the vehicle body image feature F(x, y) obtained by the vehicle size detection section is expressed using the following formula:
F(x, y) = (I ∗ G)(x, y)
wherein ∗ is the convolution operation; the process of the rotation transformation is expressed using the following formulas:
x′ = x·cos θ + y·sin θ
y′ = −x·sin θ + y·cos θ
5. The intelligent expressway overrun vehicle dynamic monitoring management system according to claim 4, wherein the vehicle size detection section detects the vehicle body image feature using a Marr-Hildreth edge detector, and the process of obtaining the edge feature comprises: calculating the gradient of the body image feature F(x, y) using the following formulas:
G_x(x, y) = ∂F(x, y)/∂x
G_y(x, y) = ∂F(x, y)/∂y
wherein G_x and G_y respectively denote the gradients of F(x, y) in the x and y directions; the edge strength is calculated using the following formula:
E(x, y) = √(G_x(x, y)² + G_y(x, y)²)
wherein E(x, y) represents the edge strength; and the edge feature is calculated using the following formula:
L(x, y) = ∇²E(x, y)
wherein L(x, y) is the edge feature.
6. The intelligent expressway overrun vehicle dynamic monitoring management system according to claim 5, wherein edges in the vehicle body image are located by detecting zero crossing points of the edge feature using the following formula:
Z(x, y) = 1 if L(x, y) > 0 and (L(x−1, y) < 0 or L(x, y−1) < 0);
Z(x, y) = −1 if L(x, y) < 0 and (L(x−1, y) > 0 or L(x, y−1) > 0);
Z(x, y) = 0 otherwise;
and the following formula is used to form continuous edge line segments by connecting adjacent zero crossing points, thereby obtaining the vehicle contour:
C(x, y) = 1 if Z(x, y) ≠ 0 and (C(x−1, y) = 1 or C(x, y−1) = 1); otherwise C(x, y) = 0;
wherein C(x, y) = 1 represents the presence of an edge segment and C(x, y) = 0 its absence; Z(x, y) = 1 or −1 indicates that there is a zero crossing at that location, and Z(x, y) = 0 indicates no zero crossing; C(x−1, y) and C(x, y−1) denote the values at the previous pixel position (x−1, y) and the pixel position to the left (x, y−1) of the current pixel position (x, y).
7. The intelligent expressway overrun vehicle dynamic monitoring and management system according to claim 6, wherein the license plate recognition unit uses a Gaussian filter to blur the license plate image, the blurred image being B(x, y); a plurality of different gradient directions θ_k are set, and license plate image features are extracted from the blurred image using the following formula:

F_k(x, y) = |∇B(x, y)| · δ(|θ(x, y)| − θ_k)
wherein F_k(x, y) is the license plate image feature in the k-th gradient direction; δ(·) is the Dirac delta function; |θ(x, y)| represents the absolute value of the gradient direction; the upper limit of the index k is equal to the number N of license plate characters; the license plate characters are located from the license plate image features using a Markov random field through the following formula:

P(s) ∝ exp(−E(s))
wherein ∝ represents proportionality; s is the position variable of the license plate characters; E(s) denotes the energy function of the license plate characters in the gradient direction θ_k, and is defined as:

E(s) = −‖F_k(s)‖_F²
wherein ‖·‖_F is the second-order (Frobenius) norm.
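The blurring, direction-binned feature extraction, and an energy-based localisation step can be sketched as follows; realising the Dirac delta as a discrete direction-bin indicator, and replacing the Markov random field with a simple column-energy heuristic, are both simplifying assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur giving the blurred plate image B."""
    t = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    k /= k.sum()
    padded = np.pad(np.asarray(img, dtype=float), radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def directional_features(blurred, n_dirs):
    """One gradient-magnitude map per quantised direction bin; the Dirac delta
    of the claim is realised as a discrete indicator on the direction bin."""
    gy, gx = np.gradient(blurred)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # direction folded to [0, pi)
    bins = np.minimum((ang / np.pi * n_dirs).astype(int), n_dirs - 1)
    return [mag * (bins == d) for d in range(n_dirs)]

def locate_by_energy(feature_maps, n_chars):
    """Toy stand-in for the MRF step: keep the columns whose summed feature
    response is highest, i.e. whose energy (negated response) is lowest."""
    response = sum(f.sum(axis=0) for f in feature_maps)
    return np.sort(np.argsort(-response)[:n_chars])

plate = np.zeros((10, 30))
plate[2:8, 5] = 1.0                           # a single vertical stroke
blurred = gaussian_blur(plate)
feats = directional_features(blurred, n_dirs=7)
```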
8. The intelligent expressway overrun vehicle dynamic monitoring and management system according to claim 7, wherein a dictionary of license plate characters is learned by sparse dictionary learning through the following formula, and the license plate characters are represented as sparse linear combinations:

min_{D, α} ‖X − Dα‖_F² + λ‖α‖₁

wherein D is the dictionary; α is the sparse coefficient; λ is a regularization parameter; α_i is a sparse component, and each license plate character corresponds to one sparse component α_i.
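The sparse-coding subproblem of this formulation (solving for α with D fixed) can be sketched with ISTA; the step size, iteration count, λ value, and the toy identity dictionary are illustrative assumptions:

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """ISTA for min_a ||x - D a||_2^2 + lam * ||a||_1: the sparse coefficients."""
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)  # 1/L for the quadratic term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - x)              # gradient of the smooth part
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return a

D = np.eye(4)                                       # toy dictionary: one atom per axis
alpha = sparse_code(np.array([1.0, 0.0, 0.0, 0.0]), D, lam=0.1)
```

Full dictionary learning alternates this coding step with a dictionary update (e.g. K-SVD or block coordinate descent); only the coding step is shown here.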
9. The intelligent expressway overrun vehicle dynamic monitoring and management system according to claim 8, wherein the license plate characters are segmented by a character segmentation algorithm through the following formula:

wherein s_i is the segmentation point of the i-th character.
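One common way to realise such a character segmentation is a vertical-projection valley search; the equal-width anchors and search-window size in this sketch are illustrative assumptions rather than the claimed formula:

```python
import numpy as np

def segment_characters(binary_plate, n_chars):
    """Projection-valley segmentation sketch: near each nominal boundary of an
    equal-width split, cut at the column with the least ink."""
    proj = binary_plate.sum(axis=0)            # vertical projection profile
    width = binary_plate.shape[1]
    half_win = max(width // (2 * n_chars), 1)  # search window around each boundary
    cuts = []
    for i in range(1, n_chars):
        nominal = i * width // n_chars
        lo = max(nominal - half_win, 0)
        hi = min(nominal + half_win, width - 1)
        cuts.append(lo + int(np.argmin(proj[lo:hi + 1])))
    return cuts

# Three 3-pixel-wide "characters" separated by empty columns at 3 and 7.
plate = np.zeros((5, 12))
for start in (0, 4, 8):
    plate[:, start:start + 3] = 1.0
cuts = segment_characters(plate, 3)
```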
10. The intelligent expressway overrun vehicle dynamic monitoring and management system according to claim 9, wherein, for each segmented character, dictionary learning is used to recognize the character through the following formula:

α* = argmin_α ‖y − Dα‖₂² + λ‖α‖₁
wherein y is the sparse component of the segmented character, and α* is the sparse coefficient used for recognition; the recognition of each character is completed by comparing α* with a pre-established look-up table of sparse coefficients and characters.
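The recognition step can likewise be sketched by sparse-coding a segmented character against the dictionary and mapping the dominant coefficient to a label; the largest-coefficient rule stands in for the claimed comparison table and is an assumption, as are the toy templates:

```python
import numpy as np

def recognize_character(patch, dictionary, labels, lam=0.1, n_iter=200):
    """Sparse-code the character patch (ISTA, as in the dictionary-learning
    sketch) and return the label of the atom with the largest coefficient."""
    step = 1.0 / (2.0 * np.linalg.norm(dictionary, 2) ** 2)
    a = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        z = a - step * 2.0 * dictionary.T @ (dictionary @ a - patch)
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return labels[int(np.argmax(np.abs(a)))], a

# Toy dictionary: column 0 is the template for 'A', column 1 for 'B'.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
label, coeffs = recognize_character(np.array([1.0, 0.0, 0.0]), D, ["A", "B"])
```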
CN202311656951.XA 2023-12-06 2023-12-06 Dynamic monitoring management system for intelligent expressway overrun vehicle Active CN117351439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311656951.XA CN117351439B (en) 2023-12-06 2023-12-06 Dynamic monitoring management system for intelligent expressway overrun vehicle


Publications (2)

Publication Number Publication Date
CN117351439A true CN117351439A (en) 2024-01-05
CN117351439B CN117351439B (en) 2024-02-20

Family

ID=89363595

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311656951.XA Active CN117351439B (en) 2023-12-06 2023-12-06 Dynamic monitoring management system for intelligent expressway overrun vehicle

Country Status (1)

Country Link
CN (1) CN117351439B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183427A (en) * 2007-12-05 2008-05-21 浙江工业大学 Computer vision based peccancy parking detector
CN101976341A (en) * 2010-08-27 2011-02-16 中国科学院自动化研究所 Method for detecting position, posture, and three-dimensional profile of vehicle from traffic images
CN103761531A (en) * 2014-01-20 2014-04-30 西安理工大学 Sparse-coding license plate character recognition method based on shape and contour features
CN104036323A (en) * 2014-06-26 2014-09-10 叶茂 Vehicle detection method based on convolutional neural network
CN104236478A (en) * 2014-09-19 2014-12-24 山东交通学院 Automatic vehicle overall size measuring system and method based on vision
KR20150111611A (en) * 2014-03-26 2015-10-06 한국전자통신연구원 Apparatus and method for detecting vehicle candidate
CN106650553A (en) * 2015-10-30 2017-05-10 比亚迪股份有限公司 License plate recognition method and system
CN106710231A (en) * 2017-01-23 2017-05-24 上海良相智能化工程有限公司 Oversize and overload nonlocal law enforcement system
CN107220603A (en) * 2017-05-18 2017-09-29 惠龙易通国际物流股份有限公司 Vehicle checking method and device based on deep learning
CN207817965U (en) * 2017-08-30 2018-09-04 中交第二航务工程勘察设计院有限公司 A kind of highway overload remediation system
CN109993056A (en) * 2019-02-25 2019-07-09 平安科技(深圳)有限公司 A kind of method, server and storage medium identifying vehicle violation behavior
CN111738228A (en) * 2020-08-04 2020-10-02 杭州智诚惠通科技有限公司 Multi-view vehicle feature matching method for hypermetrological evidence chain verification
CN112419741A (en) * 2020-12-18 2021-02-26 苏州高新有轨电车集团有限公司 Intelligent overrun detection device for road and rail traffic intersection and detection method thereof
CN112949636A (en) * 2021-03-31 2021-06-11 上海电机学院 License plate super-resolution identification method and system and computer readable medium
CN114387595A (en) * 2022-01-17 2022-04-22 胡水花 Automatic cargo identification method and device based on cargo splicing identification
WO2023155483A1 (en) * 2022-02-17 2023-08-24 广州广电运通金融电子股份有限公司 Vehicle type identification method, device, and system
CN116818009A (en) * 2023-06-09 2023-09-29 北京石油化工学院 Truck overrun detection system and method


Non-Patent Citations (3)

Title
SUN HUI et al.: "Research on the Safety Governance of Overload and Overrun Transportation on Urban Expressways Based on Big Data Analysis—Taking Huai'an City as an Example", International Conference on Internet of Things and Smart City, pages 1-10 *
FAN SIMENG: "Design of Target Detection and Segmentation Modules for Vehicle Recognition", China Master's Theses Full-text Database, Engineering Science and Technology II, pages 034-723 *
YAN HAITAO: "On the Overload-Control Electromechanical Systems at Expressway Entrances and Their Installation", Transportation Technology and Management, vol. 4, no. 12, pages 126-128 *

Also Published As

Publication number Publication date
CN117351439B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
Wang et al. Asphalt pavement pothole detection and segmentation based on wavelet energy field
Gao et al. Detection and segmentation of cement concrete pavement pothole based on image processing technology
Li et al. Lane detection based on connection of various feature extraction methods
CN111814686A (en) Vision-based power transmission line identification and foreign matter invasion online detection method
CN110349207A (en) A kind of vision positioning method under complex environment
CN107273802A (en) A kind of detection method and device of railroad train brake shoe drill ring failure
CN113240623B (en) Pavement disease detection method and device
CN109117855A (en) Abnormal power equipment image identification system
CN110348307B (en) Path edge identification method and system for crane metal structure climbing robot
CN114494161A (en) Pantograph foreign matter detection method and device based on image contrast and storage medium
Li et al. Ship target detection and recognition method on sea surface based on multi-level hybrid network
Ryu et al. Image edge detection using fuzzy c-means and three directions image shift method
Duan et al. Real time road edges detection and road signs recognition
CN117197700B (en) Intelligent unmanned inspection contact net defect identification system
Fang et al. Towards real-time crack detection using a deep neural network with a Bayesian fusion algorithm
CN117351439B (en) Dynamic monitoring management system for intelligent expressway overrun vehicle
CN115205564B (en) Unmanned aerial vehicle-based hull maintenance inspection method
Pan et al. An efficient method for skew correction of license plate
Cooke A fast automatic ellipse detector
Si-ming et al. Moving shadow detection based on Susan algorithm
CN115984186A (en) Fine product image anomaly detection method based on multi-resolution knowledge extraction
CN114283157A (en) Ellipse fitting-based ellipse object segmentation method
CN113298725A (en) Correction method for superposition error of ship icon image
Chong et al. Fabric Defect Detection Method Based on Projection Location and Superpixel Segmentation
Petwal et al. Computer vision based real time lane departure warning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant