CN114399744A - Vehicle type recognition method and device, electronic equipment and storage medium


Info

Publication number
CN114399744A
Authority
CN
China
Prior art keywords
vehicle
point cloud
type
image
axle
Legal status
Pending
Application number
CN202111596011.7A
Other languages
Chinese (zh)
Inventor
许军立
冯洪亮
胡小波
Current Assignee
LeiShen Intelligent System Co Ltd
Original Assignee
LeiShen Intelligent System Co Ltd
Priority date
Filing date
Publication date
Application filed by LeiShen Intelligent System Co Ltd
Priority to CN202111596011.7A
Publication of CN114399744A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The invention discloses a vehicle type recognition method, a vehicle type recognition device, electronic equipment and a storage medium, wherein the method comprises the following steps: in the running process of a vehicle, acquiring a vehicle image set captured by a camera device and a point cloud data set containing the vehicle collected by a laser radar; constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting a vehicle point cloud from the three-dimensional point cloud model; processing the vehicle point cloud and the vehicle image set to determine a preliminary vehicle type, a vehicle body length and axle information of the vehicle, the axle information comprising the number of axles and the axle type; and determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information. The invention provides a scheme in which a camera device and a laser radar jointly recognize the vehicle type on the basis of the preliminary vehicle type, the vehicle body length, the number of axles and the axle type, which improves the accuracy and reliability of the vehicle type recognition result.

Description

Vehicle type recognition method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image recognition, in particular to a vehicle type recognition method and device, electronic equipment and a storage medium.
Background
Intelligent transportation is a main development direction of the transportation industry and a hot research topic in the field at the present stage, and vehicle type identification is a key technology within it. In the prior art, a camera is generally used to capture vehicle images for vehicle type identification, and the accuracy is low. Improvements are needed.
Disclosure of Invention
The embodiment of the invention provides a vehicle type identification method, a vehicle type identification device, electronic equipment and a storage medium, which can improve the accuracy and reliability of a vehicle type identification result.
In a first aspect, an embodiment of the present invention provides a vehicle type identification method, where the method includes:
in the running process of a vehicle, acquiring a vehicle image set acquired by a camera device and a point cloud data set including the vehicle acquired by a laser radar; the camera device and the laser radar are deployed on one side of a road;
constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model;
processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information of the vehicle; wherein the axle information includes the number of axles and the axle type;
and determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
In a second aspect, an embodiment of the present invention further provides a vehicle type recognition apparatus, where the apparatus includes:
the data acquisition module is used for acquiring a vehicle image set acquired by the camera device and a point cloud data set including the vehicle acquired by the laser radar in the vehicle driving process; the camera device and the laser radar are deployed on one side of a road;
the model building module is used for building a three-dimensional point cloud model according to the vehicle image set and the point cloud data set and extracting vehicle point cloud from the three-dimensional point cloud model;
the first determining module is used for processing the vehicle point cloud and the vehicle image set and determining a preliminary vehicle type, a vehicle body length and axle information of the vehicle; wherein the axle information includes the number of axles and the axle type;
and the second determining module is used for determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the vehicle type recognition method as provided in the first aspect.
In a fourth aspect, embodiments of the present invention also provide a storage medium including computer-executable instructions, which when executed by a computer processor, are configured to perform the vehicle type identification method as provided in the first aspect.
In the process of vehicle running, a vehicle image set acquired by a camera device and a point cloud data set including a vehicle acquired by a laser radar are acquired; constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model; processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information comprising the number of axles and the axle type of the vehicle; the final vehicle type of the vehicle is determined according to the preliminary vehicle type, the vehicle body length and the axle information, and vehicle type recognition is performed by introducing the vehicle body length, the axle number and the axle type, so that the accuracy and the reliability of a vehicle type recognition result can be improved.
Drawings
Fig. 1A is a flowchart of a vehicle type recognition method according to an embodiment of the present invention;
fig. 1B is a schematic diagram of a vehicle type identification application scenario according to an embodiment of the present invention;
fig. 2 is a flowchart of a vehicle type recognition method according to a second embodiment of the present invention;
fig. 3 is a flowchart of a vehicle type recognition method according to a third embodiment of the present invention;
fig. 4 is a flowchart of a vehicle type recognition method according to a fourth embodiment of the present invention;
fig. 5 is a system architecture diagram for vehicle type recognition according to a fifth embodiment of the present invention;
fig. 6 is a schematic structural diagram of a vehicle type recognition apparatus in a sixth embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device in a seventh embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1A is a flowchart of a vehicle type recognition method according to an embodiment of the present invention, and fig. 1B is a schematic diagram of a vehicle type recognition application scenario according to an embodiment of the present invention. The method may be performed by a vehicle type recognition apparatus, which may be configured in an electronic device providing a vehicle type recognition service. As shown in fig. 1A-1B, the vehicle type recognition method provided in this embodiment specifically includes:
s101, in the running process of the vehicle, a vehicle image set collected by a camera device and a point cloud data set containing the vehicle collected by a laser radar are obtained.
The camera device and the laser radar are deployed on one side of the road. The camera device is an image acquisition device, such as a snapshot camera, for capturing images of the vehicle. The vehicle image set may include at least one vehicle image captured by the camera device. The laser radar is a radar system that detects the position of an object by emitting laser beams; optionally, the laser radar in this embodiment may be a multi-line laser radar. The point cloud data set is a set of points with three-dimensional coordinates collected by the laser radar and can be used to characterize the shape of the outer surface of an object. The three-dimensional geometric position of each point can be represented by (x, y, z), and the point cloud data can also carry the reflected light intensity of each point. It should be noted that the point cloud data set of this embodiment is the point cloud data collected by the laser radar while the vehicle travels through the laser radar scanning area.
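As the paragraph above describes, each lidar return carries a three-dimensional position plus a reflected intensity; a minimal sketch of that data layout (all field names are illustrative, not taken from the patent) could look like this:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LidarPoint:
    x: float          # coordinate along the vehicle travel direction, metres
    y: float          # lateral offset from the lidar, metres
    z: float          # height above the road surface, metres
    intensity: float  # reflected light intensity returned by the lidar

Frame = List[LidarPoint]          # one lidar scan
PointCloudDataSet = List[Frame]   # all scans collected while the vehicle crosses the scan area
```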
As shown in fig. 1B, the camera device and the laser radar may be deployed on one side of the road and spaced apart from each other by a certain distance, such as 4-6 meters. Preferably, the camera device may be disposed in front of the laser radar in the vehicle traveling direction.
Optionally, the camera device may capture images of the vehicle in real time or at fixed intervals, and the images captured at the individual moments are sorted and combined into the vehicle image set captured by the camera device. Likewise, once the vehicle enters the detection range of the laser radar, the laser radar may scan the vehicle in real time or at fixed intervals to collect point cloud data containing the vehicle, and the point cloud data of all acquisition moments are collated into the point cloud data set of the vehicle.
Optionally, referring to fig. 1B, the camera device and the laser radar are disposed on one side of the road, and a vehicle traveling towards them first passes the laser radar and then the camera device. Because the capture range of the camera device is larger than the detection range of the laser radar, the camera device can already capture vehicle images before the laser radar has collected any point cloud data of the vehicle. Therefore, as the vehicle approaches the laser radar and the camera device, the camera device first captures vehicle images, and then, when the vehicle enters the detection range of the laser radar, the laser radar collects the point cloud data set containing the vehicle.
S102, constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model.
The three-dimensional point cloud model is a three-dimensional environment model constructed from point cloud data, and the point cloud data at least include vehicle point clouds, ground point clouds, ground object point clouds and the like. The vehicle point cloud is the point cloud data acquired when the laser radar beam hits the vehicle body; the ground point cloud is the point cloud data acquired when the laser radar beam hits the ground; and the ground object point cloud is the point cloud data acquired when the laser radar beam hits objects on the ground, such as road signs and isolation zones.
Optionally, according to each frame of vehicle image in the vehicle image set, or according to the vehicle image set together with the point cloud data set collected by the laser radar, the point cloud position information corresponding to each frame of point cloud data at its acquisition moment may be calculated, and the three-dimensional environment model corresponding to the point cloud data set may then be constructed from each frame of point cloud data and its corresponding point cloud position information. Alternatively, the vehicle image set and the point cloud data set may be input into a pre-trained neural network model, which performs the modeling operation on the input data and outputs the constructed three-dimensional point cloud model.
Optionally, after the three-dimensional point cloud model is constructed, the vehicle point cloud needs to be extracted from it. There are many specific vehicle point cloud extraction methods, and this embodiment does not limit which one is used.
One possible implementation may be to detect and extract a vehicle point cloud from the three-dimensional point cloud model using an image feature extraction algorithm.
Another possible implementation may be to extract the vehicle point cloud from the three-dimensional point cloud model based on a pre-trained neural network model for detecting the vehicle point cloud.
Another possible implementation is to remove the ground point clouds from the three-dimensional point cloud model and cluster the remaining points to obtain the vehicle point cloud. For example, a RANSAC (Random Sample Consensus) algorithm is used to remove the ground point clouds that are irrelevant to the vehicle, and a Euclidean clustering algorithm is then applied to the remaining points to obtain the vehicle point cloud. In this implementation, clustering after removing the ground points eliminates irrelevant data and yields an accurate vehicle point cloud, which in turn makes the subsequent vehicle type identification accurate and reliable.
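As a rough illustration of the last implementation, the sketch below uses the Open3D library (an assumption of this example; the patent names no library) to fit the ground plane with RANSAC and cluster the remaining points. DBSCAN stands in here for the Euclidean clustering mentioned above, and all thresholds are placeholders that would need tuning to the lidar and the scene.

```python
import numpy as np
import open3d as o3d

def extract_vehicle_cloud(points_xyz: np.ndarray) -> np.ndarray:
    """Remove the ground plane with RANSAC, then keep the largest remaining cluster."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # RANSAC plane fit: points within 5 cm of the fitted plane are treated as ground
    _, ground_idx = pcd.segment_plane(distance_threshold=0.05,
                                      ransac_n=3,
                                      num_iterations=200)
    non_ground = pcd.select_by_index(ground_idx, invert=True)

    # DBSCAN as a stand-in for Euclidean clustering: points closer than eps share a cluster
    labels = np.array(non_ground.cluster_dbscan(eps=0.3, min_points=10))
    if labels.size == 0 or labels.max() < 0:
        return np.empty((0, 3))

    # assume the largest cluster is the vehicle
    largest = np.bincount(labels[labels >= 0]).argmax()
    return np.asarray(non_ground.points)[labels == largest]
```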
S103, processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information of the vehicle.
The axle information may include the number of axles and the axle type. An axle is a shaft with wheels mounted at both ends, and a vehicle has at least one axle. The axle type characterizes the kind of axle and can be expressed by the number of single-end tires per axle, where the number of single-end tires refers to the number of tires on the wheel at one end of that axle. For example, if there are 2 axles, the first axle has 1 single-end tire and the second axle has 2, the axle type is 12; if there are 3 axles, the first axle has 1 single-end tire and the second and third axles each have 2, the axle type is 122. The preliminary vehicle type of the vehicle refers to the type identified in the first pass, such as passenger car, bus, truck or special operation vehicle. The vehicle body length refers to the length from the head of the vehicle to its tail.
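Following the worked examples above, the axle type can be read as the concatenation of the per-axle single-end tire counts; a tiny sketch of that encoding (the function name is illustrative) is:

```python
def axle_type(single_end_tires_per_axle: list[int]) -> str:
    """Concatenate the single-end tire count of each axle, front to back."""
    return "".join(str(n) for n in single_end_tires_per_axle)

assert axle_type([1, 2]) == "12"      # 2 axles: 1 tire, then 2 tires per end
assert axle_type([1, 2, 2]) == "122"  # 3 axles: 1 tire, then 2, then 2
```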
Optionally, in this embodiment, the vehicle point cloud and the vehicle image set may be jointly analyzed to output the preliminary vehicle type, vehicle body length and axle information of the vehicle; it is also possible to determine the vehicle body length and axle information based on the vehicle point cloud, and the preliminary vehicle type based on the vehicle image set.
Specifically, the preliminary vehicle type, the vehicle body length and the axle information of the vehicle can be determined based on the vehicle point cloud and the vehicle image set through a preset image processing algorithm (such as a vehicle type recognition algorithm, a vehicle body recognition algorithm and an axle recognition algorithm); or determining the preliminary model, the length of the body and the axle information of the vehicle based on the vehicle point cloud and the vehicle image set through a pre-trained neural network model. Optionally, when the preliminary vehicle type, the vehicle body length, and the axle information of the vehicle are determined by the neural network model, the functions of vehicle type recognition, vehicle body recognition, and axle recognition may be integrated into one neural network model or a plurality of neural network models.
Optionally, if the preliminary model, the body length and the axle information of the vehicle are determined through a pre-trained neural network model, the specific implementation manner of the scheme may be: processing the vehicle point cloud through a first neural network model to determine the vehicle body length and the axle information of the vehicle; and processing the vehicle image set through the second neural network model to determine a preliminary vehicle type of the vehicle.
The first neural network model is a neural network model with a vehicle body and axle identification function; the second neural network model is a neural network model with preliminary vehicle type identification. The first neural network model and the second neural network model may be the same or different.
Optionally, the acquired three-dimensional vehicle point cloud may be input directly into the first neural network model, which processes the input vehicle point cloud according to the algorithm learned during training and predicts the vehicle body length and axle information of the vehicle (i.e., the number of axles and the axle type determined by the number of single-end tires per axle). Meanwhile, the acquired vehicle image set may be sorted, at least one vehicle image may be selected from it according to a certain screening rule and input into the second neural network model, and the second neural network model performs vehicle type recognition on the input image and outputs the preliminary vehicle type predicted for the vehicle. The screening rule may be to select at least one image at random, or to select at least one image in which the vehicle region is clearest and most complete.
It should be noted that different neural network models are adopted to predict the preliminary vehicle type, the vehicle body length and the axle information according to different data, so that the accuracy of the prediction result is greatly improved, and the accuracy and the reliability of the subsequent vehicle type recognition result are further guaranteed.
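A minimal sketch of this two-model split, with PyTorch-style model interfaces that are assumptions for illustration (the patent does not fix the network architectures or their input/output formats), could look like the following:

```python
import torch

def predict_vehicle_attributes(vehicle_points: torch.Tensor,
                               vehicle_images: list[torch.Tensor],
                               axle_net: torch.nn.Module,
                               type_net: torch.nn.Module):
    """Run the "first" network on the point cloud and the "second" network on one image.

    axle_net is assumed to return (body_length, per-axle single-end tire counts);
    type_net is assumed to return class logits for the preliminary vehicle type.
    """
    with torch.no_grad():
        body_length, tire_counts = axle_net(vehicle_points.unsqueeze(0))

        # screening rule placeholder: take the first image of the sorted set
        image = vehicle_images[0]
        type_logits = type_net(image.unsqueeze(0))
        preliminary_type = int(type_logits.argmax(dim=-1))

    return float(body_length), tire_counts.squeeze(0).tolist(), preliminary_type
```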
And S104, determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
The final vehicle type is obtained by further subdividing the preliminary vehicle type into more detailed vehicle type information. The final vehicle type not only contains the use category of the vehicle (such as passenger car, bus or truck), but also contains finer-grained information within each use category. For example, for the truck category, the finer-grained information may include, but is not limited to, configuration information (such as a single truck or a truck train) and axle class information (such as class one, class two or class three).
Optionally, the determined preliminary vehicle type, vehicle body length and axle information may be input into a preset final vehicle type determination model, which analyzes the input information according to the algorithm learned during training and predicts the final vehicle type of the vehicle. Alternatively, the preliminary vehicle type, vehicle body length and axle information may be compared against preset judgment rules for the various vehicle types to determine the final vehicle type. For example, if the preliminary type is a truck, the body length exceeds 6 m, the number of axles is 2 and the axle type is 12, the final type is determined to be a class-two truck; if the preliminary type is a truck, the body length exceeds 6 m, the number of axles is 3 and the axle type is 122, the final type is determined to be a class-three train; and if the preliminary type is a truck, the body length exceeds 6 m, the number of axles is 3 and the axle type is 15 or 122, the final type is determined to be a class-three truck.
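The rule-based alternative amounts to looking up the tuple (preliminary type, body length, axle count, axle type) in a rule table; a minimal sketch follows, encoding only the first worked example above (the remaining rules follow the same pattern, and the real table would come from the applicable charging standard):

```python
# (preliminary type, minimum body length in metres, axle count, axle type) -> final type
RULES = [
    ("truck", 6.0, 2, "12", "class-two truck"),
]

def final_vehicle_type(preliminary_type: str, body_length_m: float,
                       axle_count: int, axle_type: str) -> str:
    for p_type, min_len, n_axles, a_type, final in RULES:
        if (preliminary_type == p_type and body_length_m > min_len
                and axle_count == n_axles and axle_type == a_type):
            return final
    return preliminary_type  # fall back to the preliminary type when no rule matches
```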
In the process of vehicle running, a vehicle image set acquired by a camera device and a point cloud data set including a vehicle acquired by a laser radar are acquired; constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model; processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information comprising the number of axles and the axle type of the vehicle; and determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information. Vehicle type recognition is carried out by introducing the length of the vehicle body, the number of axles and the axle type, so that the accuracy and the reliability of a vehicle type recognition result can be improved, and a new solution is provided for accurately recognizing the vehicle type of the vehicle.
Preferably, after the final vehicle type of the vehicle is determined, it may be transmitted to a vehicle management system, so that the vehicle management system verifies the vehicle type information reported by the vehicle against the final vehicle type.
The vehicle management system may be any system that needs to obtain the vehicle type recognition result and act on it, for example a toll booth system or a vehicle administration system. The vehicle type information reported by the vehicle may be reported to the vehicle management system through the vehicle's On Board Unit (OBU).
Optionally, after the final vehicle type of the vehicle is determined, it may be transmitted to the vehicle management system, so that the vehicle management system compares the final vehicle type obtained by the vehicle type identification method of this embodiment with the vehicle type information reported by the vehicle through its electronic tag; if they are consistent, the information reported by the vehicle is correct. In this way, possible vehicle type misreporting can be effectively monitored on the basis of an accurate and reliable vehicle type recognition result.
Example two
Fig. 2 is a flowchart of a vehicle type identification method according to a second embodiment of the present invention, and in this embodiment, based on the above embodiment, a detailed explanation is further performed on "building a three-dimensional point cloud model according to a vehicle image set and a point cloud data set", and as shown in fig. 2, the vehicle type identification method according to this embodiment specifically includes:
s201, in the vehicle running process, a vehicle image set collected by a camera device and a point cloud data set containing a vehicle collected by a laser radar are obtained.
S202, determining the vehicle running speed according to the vehicle image set and the point cloud data set.
The driving speed of the vehicle refers to the driving speed of the vehicle corresponding to the time when the camera device and the laser radar collect each frame of data. Optionally, the camera device and the lidar may acquire data at the same acquisition frequency.
Alternatively, one possible embodiment: the vehicle image set and the point cloud data set can be input into a preset vehicle speed determination model, the model can analyze input data based on an algorithm during training, and vehicle running speed corresponding to each frame of vehicle image and point cloud data collection time is predicted and output.
Another possible implementation: the vehicle image set and the point cloud data set may be processed with a speed measurement algorithm to determine the vehicle traveling speed. For example, an optical flow method may be adopted: when the laser radar has not yet collected vehicle point cloud data, the traveling speed of the vehicle is determined only from the vehicle image set captured by the camera device, based on the state relationship of the vehicle between two adjacent frames of images; when the laser radar has collected vehicle point cloud data, the vehicle image set captured by the camera device is further combined with the point cloud data collected by the laser radar, and the vehicle traveling speed is determined with the optical flow method together with a point cloud matching algorithm.
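To make the image-only branch concrete, the sketch below estimates a speed from dense optical flow between two consecutive frames with OpenCV. The metres-per-pixel scale factor is an assumed calibration of the road plane, and the whole function is only a rough illustration of the idea named above, not the patent's implementation:

```python
import cv2
import numpy as np

def estimate_speed_from_flow(prev_img, next_img, dt_s: float, metres_per_pixel: float) -> float:
    """Rough vehicle speed (m/s) from dense optical flow between two adjacent frames."""
    prev_gray = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_img, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # median horizontal displacement (pixels per frame), dominated by the moving vehicle
    dx = np.median(flow[..., 0])
    return abs(dx) * metres_per_pixel / dt_s
```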
S203, determining vehicle position information corresponding to each frame of point cloud data in the point cloud data set according to the vehicle running speed and the laser radar parameters.
The lidar parameters may include a scanning frequency of the lidar and position information of the lidar. The vehicle position information refers to the position coordinate information of the vehicle when the laser radar collects each frame of point cloud image.
Optionally, after the vehicle traveling speed is determined, the distance traveled by the vehicle within one sampling period may be computed from that speed and the acquisition frequency of the laser radar. The point cloud position coordinate of the vehicle in the traveling direction for each frame of point cloud data, i.e., the vehicle position information, may then be determined from the position of the laser radar and the distance traveled per sampling period.
It should be noted that, because the laser radar is installed at a fixed position, it cannot by itself resolve how far the vehicle has moved in the traveling direction between frames; the point cloud position coordinates in the traveling direction therefore have to be determined from the traveling speed of the vehicle.
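In other words, each frame advances the vehicle by v / f, the distance covered in one sampling period; a minimal sketch of that accumulation (function and parameter names are illustrative) is:

```python
def frame_positions(speeds_mps: list[float], scan_frequency_hz: float,
                    lidar_x_m: float = 0.0) -> list[float]:
    """Travel-direction coordinate of the vehicle for each lidar frame.

    speeds_mps holds the vehicle speed at each frame; coordinates are relative
    to the lidar position along the road.
    """
    dt = 1.0 / scan_frequency_hz
    positions, x = [], lidar_x_m
    for v in speeds_mps:
        positions.append(x)
        x += v * dt  # distance covered in one sampling period
    return positions
```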
And S204, constructing a three-dimensional point cloud model according to the vehicle position information and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model.
Optionally, because the laser radar is stationary while collecting the point cloud, the coordinate of every frame of point cloud data along the vehicle traveling direction is essentially fixed. The traveling-direction coordinate of each frame of point cloud data may therefore be updated with the vehicle position information determined for that frame in S203, and the three-dimensional point cloud model is constructed from the updated three-dimensional coordinates and the light intensity values of all frames.
It should be noted that superimposing the vehicle position information onto the position coordinates contained in the collected point cloud data set, and constructing the three-dimensional point cloud model from the result, yields a more accurate model.
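A minimal sketch of this stitching step is given below; it assumes each frame is an (N, 4) array of (x, y, z, intensity) in the lidar frame, with x along the road, and superimposes the per-frame vehicle position onto that coordinate as described above:

```python
import numpy as np

def build_point_cloud_model(frames: list[np.ndarray], positions: list[float]) -> np.ndarray:
    """Offset each frame along the travel direction and stack them into one model."""
    stitched = []
    for pts, x_pos in zip(frames, positions):
        pts = pts.copy()
        pts[:, 0] += x_pos   # superimpose the vehicle position onto the travel-direction coordinate
        stitched.append(pts)
    return np.vstack(stitched)  # (sum(N_i), 4): x, y, z, intensity
```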
Optionally, after the three-dimensional point cloud model is constructed, the specific process of extracting the vehicle point cloud from it has already been described in detail in S102 and is not repeated here.
And S205, processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information of the vehicle.
And S206, determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
In the vehicle running process, after a vehicle image set and a point cloud data set are obtained, the vehicle running speed is further determined according to the vehicle image set and the point cloud data set; determining vehicle position information corresponding to each frame of point cloud data in the point cloud data set according to the vehicle running speed and the laser radar parameters; and constructing a three-dimensional point cloud model according to the vehicle position information and the point cloud data set. Then extracting vehicle point cloud from the three-dimensional point cloud model; processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information of the vehicle; and finally determining the final vehicle type of the vehicle. The vehicle position information is determined through the vehicle image set and the point cloud data set, and the acquired vehicle point cloud can be more accurate based on the mode of establishing the three-dimensional point cloud model through the vehicle position information and the point cloud data set, so that the final vehicle type can be accurately and reliably identified.
EXAMPLE III
Fig. 3 is a flowchart of a vehicle type recognition method according to a third embodiment of the present invention. On the basis of the above embodiments, this embodiment further explains in detail the step of "processing the vehicle point cloud through the first neural network model to determine the vehicle body length, the number of axles and the number of single-end tires per axle". As shown in fig. 3, the vehicle type recognition method provided in this embodiment specifically includes:
s301, in the vehicle running process, a vehicle image set collected by a camera device and a point cloud data set containing a vehicle collected by a laser radar are obtained.
S302, constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model.
And S303, projecting the vehicle point cloud to a vertical plane to obtain a vehicle body point cloud image.
Wherein the vertical plane is a plane where a door side of the vehicle is located; the vehicle body point cloud image refers to an image of a vehicle body on the side of the vehicle (i.e., a vehicle body on the door side of the vehicle).
Optionally, the three-dimensional point cloud data may be reduced in dimension with a dimension reduction algorithm, i.e., projected onto the plane where the door side of the vehicle is located, to obtain the door-side vehicle body image (i.e., the vehicle body point cloud image). Optionally, before the vehicle point cloud is projected from three-dimensional space onto the vertical plane, it may be filtered, and the filtered vehicle point cloud is then projected onto the vertical plane to obtain the vehicle body point cloud image. It should be noted that filtering before projecting onto the vertical plane removes noise in the vehicle point cloud data and makes it easier to subsequently extract an accurate vehicle contour.
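A minimal sketch of the projection step follows; it assumes x is the travel direction and z is the height, so dropping the lateral y coordinate gives the side view, which is then rasterized into a binary image. The grid resolution is a placeholder:

```python
import numpy as np

def project_to_body_image(vehicle_points: np.ndarray, resolution_m: float = 0.02) -> np.ndarray:
    """Project an (N, 3) vehicle point cloud onto the door-side vertical plane."""
    x = vehicle_points[:, 0]                               # along the road
    z = vehicle_points[:, 2]                               # height
    cols = ((x - x.min()) / resolution_m).astype(int)
    rows = ((z.max() - z) / resolution_m).astype(int)      # image rows grow downwards
    img = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
    img[rows, cols] = 255                                  # mark occupied cells
    return img
```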
S304, extracting the vehicle contour from the vehicle body point cloud image.
The vehicle contour refers to an outer shape contour of a door-side vehicle body of the vehicle.
Optionally, after the two-dimensional vehicle body point cloud image is obtained, it may be input into a target detection model trained with a target detection algorithm (e.g., the YOLOv5 algorithm), and the vehicle contour in the vehicle body point cloud image is extracted by the target detection model. The vehicle contour can also be extracted from the point cloud image with a conventional image processing algorithm, such as an edge detection algorithm.
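As an illustration of the edge-detection alternative (a trained detector such as YOLOv5 could be used instead), the following OpenCV sketch pulls the largest contour out of the body point cloud image; the blur kernel and Canny thresholds are placeholders:

```python
import cv2

def extract_vehicle_contour(body_image):
    """Largest external contour of the 2-D body point cloud image, assumed to be the vehicle."""
    blurred = cv2.GaussianBlur(body_image, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```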
S305, processing the vehicle contour through the first neural network model, and determining the vehicle body length and the axle information of the vehicle.
Optionally, the vehicle contour extracted from the point cloud image may be directly input into the first neural network model, and after the vehicle contour is analyzed and identified by the first neural network model, the body length and the axle information of the vehicle may be determined.
Optionally, in order to improve the accuracy of determining the length of the vehicle body and the axle information, in this embodiment, before the vehicle contour is input to the first neural network, data enhancement processing may be performed on the extracted vehicle contour according to the reflection intensity information in the point cloud data of the vehicle, and the vehicle contour after data enhancement is input to the first neural network model, so that more accurate information on the length of the vehicle body and the axle of the vehicle is obtained.
And S306, processing the vehicle image set through the second neural network model, and determining a preliminary vehicle type of the vehicle.
And S307, determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
After vehicle point cloud is extracted from the three-dimensional point cloud model, the vehicle point cloud is projected to a vertical plane to obtain a vehicle body point cloud image, and a vehicle outline is extracted from the vehicle body point cloud image; processing the vehicle contour through a first neural network model to determine the vehicle body length and axle information of the vehicle; and processing the vehicle image set through a second neural network model, determining a preliminary vehicle type of the vehicle, and determining a final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information. According to the scheme, the three-dimensional vehicle point cloud is converted into the two-dimensional vehicle point cloud image, and then the vehicle body length and the axle information are determined, so that the determination process of the vehicle body length and the axle information is greatly simplified, and the determination efficiency is improved.
Example four
Fig. 4 is a flowchart of a vehicle type recognition method according to a fourth embodiment of the present invention, and in this embodiment, based on the foregoing embodiment, a detailed explanation is further performed on "processing a vehicle image set through a second neural network model to determine a preliminary vehicle type of a vehicle", and as shown in fig. 4, the vehicle type recognition method according to this embodiment specifically includes:
s401, in the vehicle running process, a vehicle image set collected by a camera device and a point cloud data set containing a vehicle collected by a laser radar are obtained.
S402, building a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model.
And S403, processing the vehicle point cloud through the first neural network model, and determining the length of the vehicle body and the axle information of the vehicle.
S404, determining an initial vehicle image from the vehicle image set.
The initial vehicle image is a vehicle image selected according to a certain screening rule; specifically, it is the image captured by the camera device when triggered by the laser radar upon detecting the head of the vehicle.
Optionally, when the vehicle enters the collection range of the laser radar, i.e., as soon as the laser radar has collected the point cloud of the vehicle head, it may send an acquisition signal to the camera device to trigger it to capture an image. In this embodiment, the vehicle image captured by the camera after receiving the acquisition signal sent by the laser radar may be used as the initial vehicle image.
And S405, performing distortion removal processing on the initial vehicle image to obtain a target vehicle image.
The distortion removal processing is an operation of preprocessing a portion of an image which may be distorted to remove an influence of distortion.
Optionally, the camera device is affected by the shooting angle and the lens manufacturing process, so the vehicle body area in a captured vehicle image usually exhibits radial and tangential distortion; for example, the head of the vehicle in the initial vehicle image tends to appear disproportionately large. To improve the accuracy of the preliminary vehicle type determination, after the initial vehicle image is determined it needs to be subjected to distortion removal processing to obtain the target vehicle image. Specifically, the initial image may be processed with a distortion removal model, or with an image processing algorithm such as a perspective transformation.
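A minimal sketch of the perspective-transformation variant is given below using OpenCV; the source quadrilateral covering the skewed vehicle region and the output size are assumptions of this example (calibration-based undistortion with cv2.undistort would be an alternative for pure lens distortion):

```python
import cv2
import numpy as np

def undistort_vehicle_image(image, src_quad, out_w: int, out_h: int):
    """Map the skewed vehicle region (four corner points) onto an axis-aligned rectangle."""
    dst_quad = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    matrix = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))
```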
And S406, processing the target vehicle image through the second neural network model, and determining the preliminary vehicle type of the vehicle.
Optionally, the target vehicle image subjected to the distortion removal processing may be input into a trained second neural network model, and vehicle type recognition may be performed on the target vehicle image, so as to output a preliminary vehicle type of a vehicle corresponding to the target vehicle image.
And S407, determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
In this embodiment, the vehicle image set and the point cloud data set are obtained, the vehicle point cloud is processed through the first neural network model to determine the vehicle body length and axle information of the vehicle, an initial vehicle image is then determined from the vehicle image set and subjected to distortion removal processing to obtain the target vehicle image, the target vehicle image is processed through the second neural network model to determine the preliminary vehicle type of the vehicle, and the final vehicle type is finally determined according to the preliminary vehicle type, the vehicle body length and the axle information. In this scheme, the selected initial vehicle image is fed to the second neural network only after distortion removal, which greatly improves the efficiency and accuracy of the preliminary vehicle type recognition and provides a guarantee for subsequently determining the final vehicle type accurately.
EXAMPLE five
Fig. 5 is a system architecture diagram of vehicle type identification according to a fifth embodiment of the present invention, and this embodiment provides a preferred example that a vehicle type identification system interacts with a toll booth system to identify and verify a vehicle type of a vehicle based on the above embodiments.
As shown in fig. 5, the vehicle type recognition system may include a data acquisition module, an algorithm module, a graphic display module and a background management module. The toll booth system may include a data management module and a print reporting module. The vehicle type recognition system sends the obtained final vehicle type data to the toll station system through a data interface.
Specifically, while the vehicle drives towards the toll station, the data acquisition module is used for acquiring the vehicle image set captured by the camera device and the point cloud data set containing the vehicle collected by the laser radar, and for sending them to the algorithm module;
the algorithm module is used for executing any vehicle type identification method provided by the embodiment to obtain the final vehicle type.
And the graphic display module is used for displaying the obtained final vehicle type data and the processed target vehicle image, and for sending the final vehicle type data and the target vehicle image to the background management module.
And the background management module is used for storing the final vehicle type data and the target vehicle image locally and then sending the final vehicle type data and the target vehicle image to the data management module of the toll station system through a data interface.
The data management module further compares the vehicle type information reported by the vehicle through the OBU with the data received over the data interface, i.e., performs the vehicle data comparison, verifies the vehicle type information reported by the vehicle, and sends the comparison result to the print reporting module.
And the print reporting module is used for collating the comparison results, generating an analysis report on whether the vehicle has committed cheating violations, and printing the analysis report.
Optionally, the camera device and the laser radar may be disposed on one side of a road, and in the vehicle driving process, the vehicle may first pass through the laser radar, then pass through the camera device, and finally pass through an entrance of a toll station.
It should be noted that the scheme provided by this embodiment can detect illegal behaviors such as a long-distance vehicle swapping in a short-distance OBU or using a fraudulent license plate. Specifically, because expressway charging standards are differentiated by vehicle type and the vehicle type information is written into the OBU, while lanes of the electronic non-stop toll collection system are unattended, some operating companies cut the electronic tag of a small vehicle out together with the glass and install it on a medium or large bus, replacing the license plate at the same time, in order to reduce the cost of using the expressway; this is the cheating behavior of a long-distance vehicle swapping in a short-distance OBU. In addition, some long-distance vehicles purchase short-distance OBUs and license plates from lawbreakers in service areas and pass through ETC lanes to reduce toll fees; this is the license plate cheating behavior.
EXAMPLE six
Fig. 6 is a schematic structural diagram of a vehicle type recognition apparatus in a sixth embodiment of the present invention, and the vehicle type recognition apparatus provided in the sixth embodiment of the present invention is capable of executing a vehicle type recognition method provided in any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the execution method. The vehicle type recognition apparatus may include a data acquisition module 601, a model construction module 602, a first determination module 603, and a second determination module 604.
The data acquisition module 601 is used for acquiring a vehicle image set acquired by a camera device and a point cloud data set including a vehicle acquired by a laser radar in the vehicle running process; the camera device and the laser radar are deployed on one side of a road;
a model construction module 602, configured to construct a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extract a vehicle point cloud from the three-dimensional point cloud model;
a first determining module 603, configured to process the vehicle point cloud and the vehicle image set, and determine a preliminary vehicle type, a vehicle body length, and axle information of the vehicle; wherein the axle information includes the number of axles and the axle type;
a second determining module 604, configured to determine a final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length, and the axle information.
In the process of vehicle running, a vehicle image set acquired by a camera device and a point cloud data set including a vehicle acquired by a laser radar are acquired; constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model; processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information comprising the number of axles and the axle type of the vehicle; the final vehicle type of the vehicle is determined according to the preliminary vehicle type, the vehicle body length and the axle information, and vehicle type recognition is performed by introducing the vehicle body length, the axle number and the axle type, so that the accuracy and the reliability of a vehicle type recognition result can be improved.
Further, model building module 602 may include:
the speed determining unit is used for determining the driving speed of the vehicle according to the vehicle image set and the point cloud data set;
the position determining unit is used for determining vehicle position information corresponding to each frame of point cloud data in the point cloud data set according to the vehicle running speed and the laser radar parameters;
and the point cloud model building unit is used for building a three-dimensional point cloud model according to the vehicle position information and the point cloud data set.
Further, the model building module 602 is specifically configured to:
and removing ground point clouds from the three-dimensional point cloud model, and clustering the residual point clouds to obtain vehicle point clouds.
Further, the first determining module 603 may include:
the first data determining unit is used for processing the vehicle point cloud through a first neural network model and determining the length of the vehicle body and the axle information of the vehicle;
and the second data determination unit is used for processing the vehicle image set through a second neural network model to determine a preliminary vehicle type of the vehicle.
Further, the first data determination unit may include:
the image acquisition subunit is used for projecting the vehicle point cloud to a vertical plane to obtain a vehicle body point cloud image; wherein the vertical plane is a plane where a door side of the vehicle is located;
the extraction subunit is used for extracting the vehicle outline from the vehicle body point cloud image;
and the data determining subunit is used for processing the vehicle contour through a first neural network model and determining the vehicle body length and the axle information of the vehicle.
Further, the second data determination unit may include:
an image determination subunit for determining an initial vehicle image from the set of vehicle images; the initial vehicle image is an image acquired by triggering a camera device when the laser radar acquires the head of the vehicle;
the target image acquisition subunit is used for carrying out distortion removal processing on the initial vehicle image to obtain a target vehicle image;
and the preliminary vehicle type determining subunit is used for processing the target vehicle image through a second neural network model to determine a preliminary vehicle type of the vehicle.
Further, the above apparatus is further configured to:
and transmitting the final vehicle type of the vehicle to a vehicle management system so that the vehicle management system verifies the vehicle type information reported by the vehicle based on the final vehicle type of the vehicle.
EXAMPLE seven
Fig. 7 is a schematic structural diagram of an electronic device in a seventh embodiment of the present invention, and fig. 7 shows a block diagram of an exemplary device suitable for implementing an embodiment of the present invention. The device shown in fig. 7 is only an example and should not bring any limitation to the function and the scope of use of the embodiments of the present invention.
As shown in FIG. 7, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)30 and/or cache memory (cache 32). The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 7, and commonly referred to as a "hard drive"). Although not shown in FIG. 7, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. System memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in system memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of embodiments described herein.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown, the network adapter 20 communicates with other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the system memory 28, for example, implementing a vehicle type recognition method provided by an embodiment of the present invention.
Example eight
The eighth embodiment of the present invention further provides a computer-readable storage medium, on which a computer program (or referred to as computer-executable instructions) is stored, where the computer program is used for executing the vehicle type identification method provided by the embodiment of the present invention when the computer program is executed by a processor.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of embodiments of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the foregoing describes only the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the embodiments of the present invention have been described in detail above, the invention is not limited to these embodiments; many other equivalent embodiments may be included without departing from the spirit of the invention, and the scope of the invention is determined by the appended claims.

Claims (10)

1. A vehicle type recognition method is characterized by comprising the following steps:
in the running process of a vehicle, acquiring a vehicle image set acquired by a camera device and a point cloud data set including the vehicle acquired by a laser radar; the camera device and the laser radar are deployed on one side of a road;
constructing a three-dimensional point cloud model according to the vehicle image set and the point cloud data set, and extracting vehicle point cloud from the three-dimensional point cloud model;
processing the vehicle point cloud and the vehicle image set, and determining a preliminary vehicle type, a vehicle body length and axle information of the vehicle; wherein the axle information includes the number of axles and the axle type;
and determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
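For illustration only, the claimed pipeline can be summarized in the following Python skeleton. Every helper name is a hypothetical placeholder (the patent does not prescribe a particular implementation); several of the helpers are sketched under the corresponding dependent claims below.

```python
# Hypothetical end-to-end skeleton of the claimed method; all helper names are placeholders.
def recognize_vehicle_type(vehicle_images, pointcloud_frames):
    model_3d = build_3d_point_cloud(vehicle_images, pointcloud_frames)       # cf. claim 2
    vehicle_points = extract_vehicle_points(model_3d)                        # cf. claim 3
    body_length, axle_count, axle_type = point_cloud_network(vehicle_points) # cf. claims 4-5
    preliminary_type = image_network(vehicle_images)                         # cf. claims 4, 6
    # The final decision combines the preliminary type with length and axle information.
    return decide_final_type(preliminary_type, body_length, axle_count, axle_type)
```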
2. The method of claim 1, wherein constructing a three-dimensional point cloud model from the vehicle image set and the point cloud data set comprises:
determining the vehicle running speed according to the vehicle image set and the point cloud data set;
determining vehicle position information corresponding to each frame of point cloud data in the point cloud data set according to the vehicle running speed and the laser radar parameters;
and constructing a three-dimensional point cloud model according to the vehicle position information and the point cloud data set.
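A minimal sketch of the frame-merging idea in this claim, assuming the vehicle speed has already been estimated (the claim's first step) and that the lidar frame rate is known; the coordinate convention and parameter names are assumptions, not taken from the patent.

```python
import numpy as np

def merge_frames_into_model(frames, speed_mps, frame_rate_hz):
    """Stack per-frame scans into one 3D model: each frame is shifted along the
    driving direction by the distance the vehicle travelled since the first frame."""
    step = speed_mps / frame_rate_hz                    # metres moved between frames
    merged = []
    for i, pts in enumerate(frames):                    # pts: (N, 3) array in lidar coordinates
        shifted = np.asarray(pts, dtype=float).copy()
        shifted[:, 0] += i * step                       # x is assumed to be the driving direction
        merged.append(shifted)
    return np.vstack(merged)
```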
3. The method of claim 1, wherein said extracting a vehicle point cloud from said three-dimensional point cloud model comprises:
and removing ground point clouds from the three-dimensional point cloud model, and clustering the residual point clouds to obtain vehicle point clouds.
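As a rough illustration of this step, ground removal and clustering could be done with a simple height threshold and DBSCAN; the thresholds are illustrative assumptions, and a plane fit (e.g., RANSAC) could replace the height cut.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_vehicle_points(points, ground_z=0.2, eps=0.5, min_samples=20):
    """Drop near-ground points, then keep the largest Euclidean cluster as the vehicle."""
    above_ground = points[points[:, 2] > ground_z]                     # crude ground removal
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(above_ground)
    kept = labels >= 0                                                 # label -1 marks noise
    if not kept.any():
        return above_ground
    largest = np.bincount(labels[kept]).argmax()
    return above_ground[labels == largest]
```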
4. The method of claim 1, wherein the processing the vehicle point cloud and the vehicle image set to determine preliminary vehicle type, body length, and axle information for the vehicle comprises:
processing the vehicle point cloud through a first neural network model to determine the length of the vehicle body and the axle information of the vehicle;
and processing the vehicle image set through a second neural network model to determine a preliminary vehicle type of the vehicle.
5. The method of claim 4, wherein the processing the vehicle point cloud via a first neural network model to determine body length and axle information for the vehicle comprises:
projecting the vehicle point cloud to a vertical plane to obtain a vehicle body point cloud image; wherein the vertical plane is a plane where a door side of the vehicle is located;
extracting a vehicle contour from the vehicle body point cloud image;
and processing the vehicle contour through a first neural network model to determine the vehicle body length and the axle information of the vehicle.
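One way to realise the projection and contour-extraction steps is sketched below; the raster resolution and axis conventions are assumptions, and the first neural network model itself is not shown.

```python
import numpy as np
import cv2

def side_profile_image(vehicle_points, resolution=0.02):
    """Project the vehicle points onto the vertical (door-side) plane and rasterise
    the resulting x-z coordinates into a binary side-view image."""
    xz = vehicle_points[:, [0, 2]].astype(float)
    xz -= xz.min(axis=0)
    h = int(xz[:, 1].max() / resolution) + 1
    w = int(xz[:, 0].max() / resolution) + 1
    img = np.zeros((h, w), dtype=np.uint8)
    cols = (xz[:, 0] / resolution).astype(int)
    rows = h - 1 - (xz[:, 1] / resolution).astype(int)   # image origin is the top-left corner
    img[rows, cols] = 255
    return img

def vehicle_contour(profile_img):
    """Return the largest external contour of the side-view image."""
    contours, _ = cv2.findContours(profile_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```

The extracted contour would then be passed to the first neural network model to determine the body length and to count and classify the axles.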
6. The method of claim 4, wherein processing the set of vehicle images through a second neural network model to determine a preliminary vehicle type for the vehicle comprises:
determining an initial vehicle image from the set of vehicle images; the initial vehicle image is an image acquired by triggering a camera device when the laser radar acquires the head of the vehicle;
carrying out distortion removal processing on the initial vehicle image to obtain a target vehicle image;
and processing the target vehicle image through a second neural network model to determine a preliminary vehicle type of the vehicle.
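A hedged sketch of the image branch, assuming calibrated camera intrinsics (camera_matrix, dist_coeffs) and a generic CNN classifier; neither the network architecture nor the class set is specified by this claim, so both are placeholders.

```python
import cv2
import torch
from torchvision import transforms

PREPROCESS = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def preliminary_vehicle_type(initial_image, camera_matrix, dist_coeffs, classifier, class_names):
    """Undistort the triggered frame, then classify it with a CNN."""
    target_image = cv2.undistort(initial_image, camera_matrix, dist_coeffs)
    with torch.no_grad():
        logits = classifier(PREPROCESS(target_image).unsqueeze(0))
    return class_names[int(logits.argmax(dim=1))]
```

The classifier could be, for example, a torchvision ResNet fine-tuned on labelled vehicle images; this choice is an assumption, not the second neural network model of the patent.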
7. The method according to any one of claims 1-6, further comprising:
and transmitting the final vehicle type of the vehicle to a vehicle management system so that the vehicle management system verifies the vehicle type information reported by the vehicle based on the final vehicle type of the vehicle.
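Purely as an illustration of this reporting step, the result could be pushed over HTTP; the endpoint, field names, and protocol are hypothetical, since the claim does not specify the interface to the vehicle management system.

```python
import requests

def report_final_type(vehicle_id, final_type,
                      endpoint="http://vms.example.com/api/vehicle-type"):   # hypothetical URL
    """Send the recognised type so the management system can verify the type
    reported by the vehicle itself (payload fields are illustrative)."""
    response = requests.post(endpoint,
                             json={"vehicle_id": vehicle_id, "final_type": final_type},
                             timeout=5)
    response.raise_for_status()
    return response.json()
```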
8. A vehicle type recognition apparatus characterized by comprising:
the data acquisition module is used for acquiring a vehicle image set acquired by the camera device and a point cloud data set including the vehicle acquired by the laser radar in the vehicle driving process; the camera device and the laser radar are deployed on one side of a road;
the model building module is used for building a three-dimensional point cloud model according to the vehicle image set and the point cloud data set and extracting vehicle point cloud from the three-dimensional point cloud model;
the first determining module is used for processing the vehicle point cloud and the vehicle image set and determining a preliminary vehicle type, a vehicle body length and axle information of the vehicle; wherein the axle information includes the number of axles and the axle type;
and the second determining module is used for determining the final vehicle type of the vehicle according to the preliminary vehicle type, the vehicle body length and the axle information.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the vehicle type recognition method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the vehicle type recognition method according to any one of claims 1 to 7.
CN202111596011.7A 2021-12-24 2021-12-24 Vehicle type recognition method and device, electronic equipment and storage medium Pending CN114399744A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111596011.7A CN114399744A (en) 2021-12-24 2021-12-24 Vehicle type recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114399744A (en) 2022-04-26

Family

ID=81226696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111596011.7A Pending CN114399744A (en) 2021-12-24 2021-12-24 Vehicle type recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114399744A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114812435A (en) * 2022-04-29 2022-07-29 苏州思卡信息系统有限公司 Vehicle three-dimensional point cloud data filtering method
CN114812435B (en) * 2022-04-29 2023-10-20 苏州思卡信息系统有限公司 Vehicle three-dimensional point cloud data filtering method
CN116091437A (en) * 2022-12-30 2023-05-09 苏州思卡信息系统有限公司 Axle number detection method based on 3D point cloud
CN116091437B (en) * 2022-12-30 2024-02-02 苏州思卡信息系统有限公司 Axle number detection method based on 3D point cloud
CN116307743A (en) * 2023-05-23 2023-06-23 浙江安邦护卫科技服务有限公司 Escort safety early warning method, system, equipment and medium based on data processing
CN116307743B (en) * 2023-05-23 2023-08-04 浙江安邦护卫科技服务有限公司 Escort safety early warning method, system, equipment and medium based on data processing
CN116342676A (en) * 2023-05-31 2023-06-27 广州市杜格科技有限公司 Method, equipment and storage medium for measuring outline dimension of traffic holographic vehicle
CN117572451A (en) * 2024-01-11 2024-02-20 广州市杜格科技有限公司 Traffic information acquisition method, equipment and medium based on multi-line laser radar
CN117572451B (en) * 2024-01-11 2024-04-05 广州市杜格科技有限公司 Traffic information acquisition method, equipment and medium based on multi-line laser radar

Similar Documents

Publication Publication Date Title
CN114399744A (en) Vehicle type recognition method and device, electronic equipment and storage medium
Zhu et al. Pavement distress detection using convolutional neural networks with images captured via UAV
Liu et al. A review of applications of visual inspection technology based on image processing in the railway industry
US8447112B2 (en) Method for automatic license plate recognition using adaptive feature set
CN109871799B (en) Method for detecting mobile phone playing behavior of driver based on deep learning
CN109871728B (en) Vehicle type recognition method and device
US9704201B2 (en) Method and system for detecting uninsured motor vehicles
CN106980113A (en) Article detection device and object detecting method
EP4239614A2 (en) Systems and methods for image-based location determination and parking monitoring
CN105448105A (en) Patrol police vehicle-based monitoring system
CN112613424A (en) Rail obstacle detection method, rail obstacle detection device, electronic apparatus, and storage medium
CN112381014A (en) Illegal parking vehicle detection and management method and system based on urban road
CN105608903A (en) Traffic violation detection method and system
CN111382735A (en) Night vehicle detection method, device, equipment and storage medium
CN113869275A (en) Vehicle object detection system that throws based on remove edge calculation
Kumar et al. E-challan automation for RTO using OCR
CN113505638A (en) Traffic flow monitoring method, traffic flow monitoring device and computer-readable storage medium
Al Nasim et al. An automated approach for the recognition of bengali license plates
CN111354191B (en) Lane driving condition determining method, device and equipment and storage medium
Song et al. Modeling and optimization of semantic segmentation for track bed foreign object based on attention mechanism
CN112861701B (en) Illegal parking identification method, device, electronic equipment and computer readable medium
CN114581863A (en) Vehicle dangerous state identification method and system
CN116206454A (en) Method, system, medium and electronic device for identifying vehicle
CN111383458B (en) Vehicle violation detection method, device, equipment and storage medium
Aiyelabegan et al. Proposed automatic number plate recognition system using machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination