CN117953460A - Vehicle wheel axle identification method and device based on deep learning


Info

Publication number: CN117953460A
Application number: CN202410346452.9A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 胡政, 黄国富, 胡其锋, 唐振中
Applicant and current assignee: Jiangxi Zonjli High Tech Co ltd

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems


Abstract

The application provides a vehicle wheel axle identification method and device based on deep learning, relating to the technical field of electric digital data processing. The method comprises the following steps: inputting an image sequence of a target vehicle within a first period from the imaging module into a first recognition model and determining an axle region image sequence according to the output of the first recognition model; inputting the axle region image sequence into a second recognition model and determining the axle type data of each axle region image according to the output of the second recognition model; performing motion prediction analysis to determine a first number of first axles; screening the first number of first axles to determine a second number of second axles; screening the second number of second axles to determine a third number of target axles that conform to the standard axle position and standard axle size; and determining the axle data of the target vehicle. The method can accurately identify the axles of vehicles in various special situations, greatly improving the accuracy and robustness of vehicle axle identification.

Description

Vehicle wheel axle identification method and device based on deep learning
Technical Field
The application relates to the technical field of electric digital data processing, in particular to a vehicle wheel axle identification method and device based on deep learning.
Background
Overloaded trucks seriously damage the pavement of highways and bridges, shorten their service life, and easily cause traffic accidents that endanger lives and property. Because a truck must carry cargo, each load-bearing rear wheel position is often fitted with two tires side by side acting as one wheel, which raises the load limit of the compartment; such a wheel is commonly called a double tire. Conversely, a wheel position that carries no load needs only one tire to drive the truck; such a wheel is called a single tire. Once the wheels of a truck are classified, its weight limit can be determined from the arrangement and combination of single and double tires by comparison with the national standard for identifying overloaded highway freight vehicles, and overloading can then be judged by comparing this limit with the truck's actual weight.
At present, the core of machine-vision-based truck overrun judgment is counting and identifying the truck's wheels through cooperation between a computer and a camera; because tires look alike, wheel identification is in practice the identification of the axle type of each axle. However, existing methods have low detection accuracy for vehicles in some special situations: for example, the same vehicle may be detected repeatedly when it reverses, and a truck carrying other vehicles on top may have the wheels of the carried, non-target vehicles falsely detected.
Disclosure of Invention
In view of the above, the application provides a vehicle axle identification method and device based on deep learning, which can accurately identify the axles of vehicles in various special situations, thereby greatly improving the accuracy and robustness of vehicle axle identification.
In a first aspect, an embodiment of the present application provides a vehicle axle identification method based on deep learning, which is applied to a processing module in an axle identification system, where the axle identification system further includes an imaging module and a passing detection module; the method comprises the following steps:
Inputting an image sequence of a target vehicle in a first period from the imaging module into a first recognition model, and determining an axle region image sequence according to the output of the first recognition model, wherein the starting time of the first period is the time when a first instruction sent by the passing detection module is received, and the ending time of the first period is the time when a second instruction sent by the passing detection module is received;
Inputting the axle region image sequence into a second recognition model, and determining axle type data of each axle region image according to the output of the second recognition model, wherein the axle type data comprises a single tire or a double tire;
Performing motion prediction analysis on the axle region image sequence to determine a first number of first axles;
Determining a movement path of each first axle in the first period, and screening the first number of first axles to determine a second number of second axles;
Determining a standard axle position and a standard axle size according to the position and the size of a first frame of axle region image in the axle region image sequence, and screening the second number of second axles to determine a third number of target axles conforming to the standard axle position and the standard axle size;
Axle data for the target vehicle is determined, the axle data including axle type data for each target axle.
In a second aspect, an embodiment of the present application provides a vehicle axle identification device based on deep learning, which is applied to a processing module in an axle identification system, where the axle identification system further includes an imaging module and a passing detection module; the device comprises:
The first recognition unit is used for inputting an image sequence of the target vehicle in a first period from the imaging module into a first recognition model, determining an axle area image sequence according to the output of the first recognition model, wherein the starting moment of the first period is the moment of receiving a first instruction sent by the vehicle passing detection module, and the ending moment of the first period is the moment of receiving a second instruction sent by the vehicle passing detection module;
the second recognition unit is used for inputting the axle region image sequence into a second recognition model, and determining the axle type data of each axle region image according to the output of the second recognition model, wherein the axle type data comprises a single tire or a double tire;
A counting unit, configured to perform motion prediction analysis on the axle region image sequence to determine a first number of first axles;
A first screening unit, configured to determine a movement path of each first axle in the first period, and screen the first number of first axles to determine a second number of second axles;
A second screening unit, configured to determine a standard axle position and a standard axle size according to a position and a size of a first frame axle area image in the axle area image sequence, and screen the second number of second axles to determine a third number of target axles that conform to the standard axle position and the standard axle size;
And the axle determining unit is used for determining axle data of the target vehicle, wherein the axle data comprises axle type data of each target axle.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing steps in any of the methods of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform part or all of the steps as described in any of the methods of the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in any of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
Therefore, the vehicle wheel axle identification method and device based on deep learning are applied to the processing module in the axle identification system, where the axle identification system further includes an imaging module and a passing detection module. First, an image sequence of the target vehicle within a first period from the imaging module is input into a first recognition model, and an axle region image sequence is determined according to the output of the first recognition model, where the starting moment of the first period is the moment a first instruction sent by the passing detection module is received, and the ending moment is the moment a second instruction sent by the passing detection module is received. Then the axle region image sequence is input into a second recognition model, and the axle type data of each axle region image is determined according to the output of the second recognition model, where the axle type data comprises a single tire or a double tire. Next, motion prediction analysis is performed on the axle region image sequence to determine a first number of first axles; the movement path of each first axle in the first period is determined, and the first number of first axles is screened to determine a second number of second axles; a standard axle position and a standard axle size are determined according to the position and size of the first frame axle region image in the axle region image sequence, and the second number of second axles is screened to determine a third number of target axles conforming to the standard axle position and standard axle size. Finally, the axle data of the target vehicle is determined, the axle data including the axle type data of each target axle. In this way, the axles of vehicles in various special scenarios can be accurately identified, greatly improving the accuracy and robustness of vehicle axle identification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a system architecture diagram of a vehicle axle recognition method based on deep learning according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 3 is a schematic flow chart of a vehicle axle recognition method based on deep learning according to an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a first recognition model according to an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a first spatial frequency module, a third spatial frequency module, and a fifth spatial frequency module according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a second spatial frequency module, a fourth spatial frequency module, and a sixth spatial frequency module according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a second recognition model according to an embodiment of the present application;
Fig. 8 is a block diagram of the functional units of a vehicle axle identification device based on deep learning according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
It should be understood that the term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In this context, the character "/" indicates that the front and rear associated objects are an "or" relationship. The term "plurality" as used in the embodiments of the present application means two or more.
"At least one" or the like in the embodiments of the present application means any combination of these items, including any combination of single item(s) or plural items(s), meaning one or more, and plural means two or more. For example, at least one (one) of a, b or c may represent the following seven cases: a, b, c, a and b, a and c, b and c, a, b and c. Wherein each of a, b, c may be an element or a set comprising one or more elements.
The "connection" in the embodiment of the present application refers to various connection manners such as direct connection or indirect connection, so as to implement communication between devices, which is not limited in the embodiment of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The following describes related content, concepts, meanings, technical problems, technical schemes, beneficial effects and the like related to the embodiment of the application.
Currently, there are two kinds of solutions for identifying the axles of a vehicle. The first works through sensors and mainly consists of pressure sensors, an axle identifier and a terminal processor. Since the path of a truck through a toll gate is roughly fixed, the system installs a pressure sensor and an axle sensor on the road section the truck must pass. When the truck's wheels roll over the sensors, the pressure sensor produces a pressure signal and the axle identifier produces an axle signal, which are transmitted to the terminal processor through a communication module. The terminal processor creates a signal collection list for each vehicle during its passage, converts each received signal into a signal pulse, and, after the truck has passed completely, puts the number of pulses into the list, thereby obtaining the number of axles. The axle identifier distinguishes single and double tires through the difference in their ground-contact area (a double tire combines two wheels while a single tire has only one, so the contact areas differ greatly) and transmits the corresponding signal to the terminal processor, which also records the single/double-tire signals in the list. Combining these signals with the axle count, the system roughly estimates the truck's weight limit and compares it with the actual weight measured by the weighing device to judge whether the truck is overweight.
There are three problems with this sensor-based identification: (1) when the vehicle speed is too high, the axle identifier misses detections and axles are lost; (2) frequent rolling by trucks damages the sensors, making system maintenance costly; (3) when a truck reverses, the same wheels roll over the pressure sensor several times, causing false detections. The first two problems stem from the identification system being in direct contact with the wheels.
The other kind identifies vehicle axles by machine vision, but its detection accuracy for vehicles in some special situations is low: for example, the same vehicle may be detected repeatedly when reversing, and the wheels of vehicles carried on top of a truck may be falsely detected even though they do not belong to the target truck.
In order to solve the above problems, the application provides a vehicle axle identification method and device based on deep learning, which can accurately identify the axles of vehicles in various special situations and greatly improve the accuracy and robustness of vehicle axle identification.
First, referring to fig. 1, a system architecture of a vehicle axle recognition method based on deep learning in an embodiment of the present application will be described, where the axle recognition system 100 includes a processing module 110, an imaging module 120, and a passing detection module 130, where the processing module 110 may be wirelessly connected to the imaging module 120 and the passing detection module 130, respectively, and the processing module 110 may receive an image from the imaging module 120 and perform recognition processing in response to an instruction of the passing detection module 130.
The imaging module 120 may include a camera module, a light source module, and the like for capturing images; it may be disposed parallel to the ground so as to capture images of the wheel region of a vehicle and transmit them to the processing module 110.
The passing detection module 130 may include a grating instrument or the like and is configured to detect whether a vehicle enters the identification area: when it detects that a vehicle has entered the identification area, it sends an instruction to the processing module 110 to start axle identification; when it detects that the vehicle has left the identification area, it sends an instruction to the processing module 110 to stop axle identification and initialize the identification result.
The processing module 110 may process the image sequence from the imaging module 120, including determining the axle regions of the vehicle, identifying each axle region to determine its axle type, determining a preliminary number of axles through motion prediction analysis, and performing two rounds of screening on the preliminary axles to determine the final number of axles, thereby obtaining accurate axle data of the vehicle.
Therefore, through the system architecture, the axles of the vehicles with various special conditions can be accurately identified, and the accuracy and the robustness of the identification of the axles of the vehicles are greatly improved.
Referring to fig. 2, fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 2, the electronic device 20 includes one or more processors 220, a memory 230, a communication module 240, and one or more programs 231, where the processor 220 is communicatively connected to the memory 230 and the communication module 240 through an internal communication bus.
Wherein the one or more programs 231 are stored in the memory 230 and configured to be executed by the processor 220, the one or more programs 231 comprising instructions for performing any of the steps of the method embodiments described above.
The processor 220 may be, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various exemplary logic blocks, units and circuits described in connection with this disclosure. The processor 220 may also be a combination that performs computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 230 may be volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
It will be appreciated that the electronic device 20 may include more or fewer structural elements than shown in the above block diagram, including, for example, a power module, physical keys, a Wi-Fi module, a speaker, a Bluetooth module, sensors, and a display module, without limitation. The electronic device 20 may be any module in the above application scenario, which is not described here again.
After understanding the software and hardware architecture of the embodiment of the present application, a vehicle axle recognition method based on deep learning in the embodiment of the present application is described below with reference to fig. 3, and fig. 3 is a schematic flow chart of the vehicle axle recognition method based on deep learning provided in the embodiment of the present application, which is applied to a processing module in an axle recognition system, where the axle recognition system further includes an imaging module and a passing detection module, and specifically includes the following steps:
Step S301, inputting an image sequence of the target vehicle in a first period from the imaging module into a first recognition model, and determining an axle region image sequence according to the output of the first recognition model.
The starting time of the first period is the time when the first instruction sent by the passing detection module is received, and the ending time of the first period is the time when the second instruction sent by the passing detection module is received.
The first recognition model comprises a first convolution module, a first spatial frequency module, a second spatial frequency module, a third spatial frequency module, a fourth spatial frequency module, a fifth spatial frequency module, a sixth spatial frequency module, a second convolution module, a first upsampling module, a first connection module, a first depth convolution module, a second upsampling module, a second connection module, a second depth convolution module, a first convolution layer, a third depth convolution module, a first superposition layer, a fourth depth convolution module, a second convolution layer, a fifth depth convolution module, a second superposition layer, a sixth depth convolution module and a third convolution layer; the inputting the image sequence of the target vehicle in the first period of time from the imaging module into a first identification model, and determining the image sequence of the wheel axle area according to the output of the first identification model comprises:
And preprocessing each image in the image sequence, inputting the preprocessed image into the first recognition model, and determining the image sequence of the wheel axle area according to the output of the first recognition model to each image.
For ease of understanding, refer to Fig. 4, a schematic structural diagram of a first recognition model according to an embodiment of the present application; the first recognition model includes a first convolution module, a first spatial frequency module, a second spatial frequency module, a third spatial frequency module, a fourth spatial frequency module, a fifth spatial frequency module, a sixth spatial frequency module, a second convolution module, a first upsampling module, a first connection module, a first depth convolution module, a second upsampling module, a second connection module, a second depth convolution module, a first convolution layer, a third depth convolution module, a first superposition layer, a fourth depth convolution module, a second convolution layer, a fifth depth convolution module, a second superposition layer, a sixth depth convolution module, and a third convolution layer.
After preprocessing, each image is input into the first recognition model and undergoes the following processing:
the image sequentially passes through the first convolution module, the first spatial frequency module and the second spatial frequency module to obtain first intermediate data;
the first intermediate data sequentially passes through the third spatial frequency module and the fourth spatial frequency module to obtain second intermediate data;
the second intermediate data sequentially passes through the fifth spatial frequency module, the sixth spatial frequency module and the second convolution module to obtain third intermediate data;
the first up-sampled data, obtained by passing the third intermediate data through the first upsampling module, is connected with the second intermediate data by the first connection module and then passed through the first depth convolution module to obtain fourth intermediate data;
the second up-sampled data, obtained by passing the fourth intermediate data through the second upsampling module, is connected with the first intermediate data by the second connection module and then passed through the second depth convolution module to obtain fifth intermediate data;
the fifth intermediate data passes through the first convolution layer to obtain first output data; it is also input into the third depth convolution module, superposed with the fourth intermediate data in the first superposition layer, and passed through the fourth depth convolution module to obtain sixth intermediate data;
the sixth intermediate data passes through the second convolution layer to obtain second output data; it is also passed through the fifth depth convolution module, superposed with the third intermediate data in the second superposition layer, and then passed through the sixth depth convolution module and the third convolution layer in sequence to obtain third output data.
The first convolution module comprises a convolution kernel, a normalization layer and a first activation function module, and the second convolution module comprises a convolution kernel, a normalization layer and a second activation function module.
Referring to Fig. 5, the first spatial frequency module, the third spatial frequency module and the fifth spatial frequency module have the same structure, each comprising a third convolution module, two depth convolution modules, a connection module and a channel rearrangement module. It should be noted that each depth convolution module consists of two connected third convolution modules, and each third convolution module comprises a convolution layer, a normalization layer and a third activation function module. The connection relationships are not described here in detail.
Referring to Fig. 6, the second spatial frequency module, the fourth spatial frequency module and the sixth spatial frequency module have the same structure, each comprising a channel segmentation module, a third convolution module, a depth convolution module, a connection module and a channel rearrangement module.
It should be noted that the channel segmentation divides the whole input feature map into 2 groups; unlike grouped convolution, this grouping does not increase the number of groups used during convolution, and one of the groups is not convolved at all. The numbers of input and output channels therefore stay consistent and memory access is minimized, and it can be understood that the spatial frequency module can thus make more effective use of global information.
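For illustration only, the following is a minimal PyTorch sketch of such a stride-1 spatial frequency unit, assuming it mirrors the ShuffleNet-V2-style design the text paraphrases (channel split, convolution on one branch only, concatenation, channel rearrangement); the class name, channel widths, kernel sizes and activation are illustrative assumptions, not values disclosed by the application.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    # Channel rearrangement: interleave the two split groups so that
    # information mixes across them in the next unit.
    n, c, h, w = x.shape
    return x.view(n, groups, c // groups, h, w).transpose(1, 2).reshape(n, c, h, w)

class SpatialFrequencyUnit(nn.Module):
    """Stride-1 unit of Fig. 6: split, convolve one branch, concat, shuffle."""

    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False),                          # third convolution module
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # depthwise part
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),                          # pointwise part
            nn.BatchNorm2d(half),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel segmentation into 2 groups; one group is not convolved at all,
        # so the input and output channel counts stay consistent.
        a, b = x.chunk(2, dim=1)
        return channel_shuffle(torch.cat((a, self.branch(b)), dim=1))
```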
It should be noted that the first recognition model in the embodiment of the present application needs neither an attention layer nor multiple segmentation operations, so the model is lighter, easier to deploy and faster to run; for example, processing a 320×320 input image takes about 0.02 s. The image stream arrives at 25 frames per second, so the algorithm has 0.04 s per frame; if processing exceeds 0.04 s, the image stream backlogs and the waiting queue eventually delays processing. Since the network itself takes 0.02 s, a margin of 0.02 s remains for post-processing such as secondary discrimination confirmation and predictive tracking.
It should be noted that the third output data is the largest feature map, the second output data the medium one, and the first output data the smallest. This multi-scale output improves the accuracy of axle target detection, so accurate positions and sizes of the axle bounding rectangles are obtained, and accurate single-tire and double-tire discrimination can then be performed by inputting the axle region image sequence into the second recognition model.
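As an aid to reading the data flow above, the following is a minimal PyTorch sketch of the fusion head, assuming the backbone has already produced the first, second and third intermediate data at strides 8, 16 and 32; the channel widths, the depthwise-separable form of the depth convolution modules and the number of output channels are illustrative assumptions, not values disclosed by the application.

```python
import torch
import torch.nn as nn

def dw_block(c_in: int, c_out: int, stride: int = 1) -> nn.Sequential:
    # Depth convolution module, sketched here as depthwise 3x3 + pointwise 1x1.
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, stride, 1, groups=c_in, bias=False),
        nn.BatchNorm2d(c_in),
        nn.ReLU(inplace=True),
        nn.Conv2d(c_in, c_out, 1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class FusionHead(nn.Module):
    def __init__(self, c1: int = 64, c2: int = 128, c3: int = 256, out_ch: int = 18):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")  # first/second upsampling modules
        self.dconv1 = dw_block(c3 + c2, c2)       # first depth convolution module
        self.dconv2 = dw_block(c2 + c1, c1)       # second depth convolution module
        self.out1 = nn.Conv2d(c1, out_ch, 1)      # first convolution layer
        self.dconv3 = dw_block(c1, c2, stride=2)  # third depth convolution module
        self.dconv4 = dw_block(c2, c2)            # fourth depth convolution module
        self.out2 = nn.Conv2d(c2, out_ch, 1)      # second convolution layer
        self.dconv5 = dw_block(c2, c3, stride=2)  # fifth depth convolution module
        self.dconv6 = dw_block(c3, c3)            # sixth depth convolution module
        self.out3 = nn.Conv2d(c3, out_ch, 1)      # third convolution layer

    def forward(self, d1, d2, d3):
        # d1, d2, d3: first/second/third intermediate data (strides 8, 16, 32).
        d4 = self.dconv1(torch.cat((self.up(d3), d2), 1))  # fourth intermediate data
        d5 = self.dconv2(torch.cat((self.up(d4), d1), 1))  # fifth intermediate data
        o1 = self.out1(d5)                                 # first output data
        d6 = self.dconv4(self.dconv3(d5) + d4)             # first superposition layer -> sixth intermediate data
        o2 = self.out2(d6)                                 # second output data
        o3 = self.out3(self.dconv6(self.dconv5(d6) + d3))  # second superposition layer -> third output data
        return o1, o2, o3
```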
Step S302, inputting the axle area image sequence into a second recognition model, and determining the axle type data of each axle area image according to the output of the second recognition model.
Wherein the axle type data comprises a single tire or a double tire; the second recognition model comprises an initial convolution module, a complete pooling layer, a residual network, an average pooling layer and an output layer. The step of inputting the axle region image sequence into the second recognition model and determining the axle type data of each axle region image according to the output of the second recognition model comprises the following steps:
Passing each axle region image sequentially through the initial convolution module, the complete pooling layer, the residual network, the average pooling layer and the output layer to obtain the axle type data of each axle region image.
Wherein the residual network may consist of a plurality of residual blocks. If the desired underlying mapping is denoted H(x), the stacked nonlinear layers are made to fit the mapping F(x) = H(x) - x, so the original mapping becomes H(x) = F(x) + x; in the extreme case where the identity mapping is optimal, it is easier to push the residual to zero than to fit an identity mapping through a stack of nonlinear layers. A shortcut connection is added that simply performs the identity mapping, and its output is added to the output of the stacked layers; this kind of connection adds no extra parameters and no computational complexity. Such very deep residual networks are easy to optimize and gain accuracy from considerably increased depth. The residual network in the embodiment of the present application may adopt a 34-layer ResNet or an 18-layer ResNet, which is not specifically limited here.
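For illustration, a minimal PyTorch sketch of one such residual block follows; it keeps the channel count fixed so the identity shortcut needs no projection, which is an illustrative simplification.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Residual block: the shortcut adds the input x to the stacked mapping F(x),
    so the block computes H(x) = F(x) + x and only the residual must be learned."""

    def __init__(self, channels: int):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The identity shortcut adds no parameters and no computational complexity.
        return self.relu(self.f(x) + x)
```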
The structure of the second recognition model is shown in Fig. 7 and will not be described here.
It can be seen that inputting the axle region image sequence into the second recognition model and determining the axle type data of each axle region image from its output yields an accurate axle type.
Step S303, performing motion prediction analysis on the axle region image sequence to determine a first number of first axles.
Wherein the following processing may be performed on each pair of consecutive axle region images to determine the first number of first axles: determining first motion data of the axle corresponding to the first axle region image, the first motion data comprising first position data, first acceleration data and first speed data; determining predicted position data from the first motion data; determining second motion data of the axle corresponding to the second axle region image, the second motion data comprising second position data, second acceleration data and second speed data; if the matching degree between the predicted position data and the second position data is higher than or equal to a preset matching threshold, determining that the axle corresponding to the first axle region image and the axle corresponding to the second axle region image are the same axle; and if the matching degree is lower than the preset matching threshold, determining that they are different axles.
Specifically, the possible position of the target in the next frame may be predicted by a predictive tracking algorithm based on target motion information (hereinafter referred to simply as PTTM), and the actual detected position in the next frame is compared with the previous frame's prediction to decide whether the two detections are the same target. The principle of the algorithm is to predict the object's possible position in the next frame from the target's current-frame speed, acceleration and position, and then to match the detection in the next frame against that prediction to determine whether it is the same target, thereby eliminating duplicate axles and obtaining the preliminary number of axles of the vehicle.
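For illustration, the following is a minimal Python sketch of the PTTM matching step under a constant-acceleration motion model; the state representation, the one-frame prediction horizon and the use of IoU with a 0.5 threshold as the matching degree are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AxleState:
    x: float; y: float; w: float; h: float  # box centre and size in the image
    vx: float = 0.0; vy: float = 0.0        # velocity, pixels per frame
    ax: float = 0.0; ay: float = 0.0        # acceleration, pixels per frame^2

def predict_next(s: AxleState) -> tuple:
    # Constant-acceleration prediction of the next-frame position (dt = 1 frame).
    return s.x + s.vx + 0.5 * s.ax, s.y + s.vy + 0.5 * s.ay

def iou(a: tuple, b: tuple) -> float:
    # Intersection-over-union of two (x1, y1, x2, y2) boxes, used as the matching degree.
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0.0 else 0.0

def is_same_axle(prev: AxleState, det: AxleState, threshold: float = 0.5) -> bool:
    # Compare the next frame's actual detection with the previous frame's prediction;
    # at or above the preset matching threshold it is the same axle and is not counted again.
    px, py = predict_next(prev)
    pred = (px - prev.w / 2, py - prev.h / 2, px + prev.w / 2, py + prev.h / 2)
    box = (det.x - det.w / 2, det.y - det.h / 2, det.x + det.w / 2, det.y + det.h / 2)
    return iou(pred, box) >= threshold
```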
It can be seen that performing motion prediction analysis on the axle region image sequence to determine the first number of first axles yields a preliminary axle count, which provides the basis for the subsequent screening.
Step S304, determining a movement path of each first axle in the first period, and screening the first number of first axles to determine a second number of second axles.
Wherein the movement path of each first axle in the first period may be determined, along with the number of movement paths in the first direction and the number of movement paths in the second direction. If the number of movement paths in the first direction is greater than half of the first number, the first axles corresponding to the movement paths in the first direction are screened out as the second axles, and the second number is the number of movement paths in the first direction; if the number of movement paths in the second direction is greater than half of the first number, the first axles corresponding to the movement paths in the second direction are screened out as the second axles, and the second number is the number of movement paths in the second direction. It is understood that the second number is less than or equal to the first number.
For example, the complete travel path that each axle of the truck traces in the camera view is first analyzed from the previous tracking results. When the vehicle performs a reversing action, the motion path of a falsely re-detected axle differs from that of an axle of a normally passing truck, and this difference in path information can be used to screen out the wheels falsely detected due to reversing. Axle-count errors caused by reversing are thus eliminated, and the accuracy of axle identification is improved.
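For illustration, a minimal Python sketch of this direction-majority screening follows; representing each movement path by the sign of its net x displacement is an illustrative assumption.

```python
def screen_by_direction(paths):
    """paths: dict mapping an axle id to its sequence of x positions over the first period.
    Keeps only the axles whose path follows the majority direction (the normally
    passing vehicle), screening out axles falsely re-detected during reversing."""
    forward, backward = [], []
    for axle_id, xs in paths.items():
        (forward if xs[-1] - xs[0] >= 0 else backward).append(axle_id)
    # More than half of the first number of axles is assumed to move with the vehicle.
    return forward if len(forward) > len(backward) else backward
```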
Step S305, determining a standard axle position and a standard axle size according to the position and the size of the first frame axle region image in the axle region image sequence, and screening the second number of second axles to determine a third number of target axles meeting the standard axle position and the standard axle size.
Wherein the first axle region image determined after receiving the first instruction may be taken as the standard axle image; the standard axle position and standard axle size are determined according to the height and size of the standard axle image; the axle position of each second axle is matched against the standard axle position to obtain a first matching degree, and the axle size of each second axle is matched against the standard axle size to obtain a second matching degree; a second axle whose first matching degree is greater than a first matching-degree threshold and whose second matching degree is greater than a second matching-degree threshold is determined to be a target axle, so as to obtain the third number of target axles. The third number is less than or equal to the second number.
For example, since the first wheel captured by the camera at the moment the truck touches the grating and triggers the algorithm must be a wheel of the target vehicle, the first detected wheel is set as the "standard wheel" in the system. Because the camera is installed parallel to the ground, it can be assumed that the size and vertical (y-axis) position of the target vehicle's wheels change little while the truck travels. By comparing the size and y-axis position of each wheel in the processed detection list with those of the standard wheel, the wheels of non-target vehicles, falsely detected because the target vehicle carries other vehicles on top, can be screened out.
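For illustration, the following is a minimal Python sketch of the standard-axle screening; the relative tolerances standing in for the two matching-degree thresholds are illustrative assumptions.

```python
def screen_by_standard(axles, std_y, std_size, y_tol=0.2, size_tol=0.3):
    """axles: list of (axle_id, y_position, size) tuples from the previous screening.
    std_y and std_size come from the first (standard) axle image; axles whose
    y position or size deviates too far are treated as wheels of a carried,
    non-target vehicle and dropped."""
    targets = []
    for axle_id, y, size in axles:
        y_match = abs(y - std_y) / std_y <= y_tol                  # first matching degree
        size_match = abs(size - std_size) / std_size <= size_tol  # second matching degree
        if y_match and size_match:
            targets.append(axle_id)
    return targets
```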
In this way, the standard axle position and standard axle size are determined from the position and size of the first frame axle region image in the axle region image sequence, and the second number of second axles is screened to determine the third number of target axles conforming to the standard axle position and standard axle size. This screening handles both the false detections that occur during reversing and the false detections that occur when a truck carries other vehicles, so the axles of vehicles in various special cases can be accurately identified, greatly improving the accuracy and robustness of vehicle axle identification.
Step S306 determines axle data of the target vehicle, the axle data including axle type data of each target axle.
Wherein the previously determined axle types may be associated with each final target axle to obtain the axle data of the target vehicle.
In one possible embodiment, the vehicle model data of the target vehicle may be determined according to the third number, the axle type data of each target axle, and the arrangement order of each target axle.
For example, a mapping of axle arrangements to model codes can be used. In the axle-type column of such a table, "0" represents a single tire and "1" represents a double tire; "other" represents other axle types, for example a special multi-axle vehicle, or the case where a person or another object touches the grating by mistake, so that detection is started erroneously and no result is computed. The model corresponding to each model code may refer to a preset mapping table, which is not listed here.
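For illustration, a minimal Python sketch of such a lookup follows; the pattern strings and model codes shown are hypothetical placeholders, since the application's mapping table is not reproduced here.

```python
# Hypothetical placeholder table: "0" = single tire, "1" = double tire, keyed by
# the front-to-back arrangement of the target axles.
AXLE_PATTERN_TO_MODEL = {
    "01": "model_code_A",   # two axles: single front, double rear
    "011": "model_code_B",  # three axles: single front, two double rear
}

def vehicle_model(axle_types):
    """axle_types: axle type data of each target axle in arrangement order, e.g. [0, 1, 1]."""
    pattern = "".join(str(t) for t in axle_types)
    return AXLE_PATTERN_TO_MODEL.get(pattern, "other")  # unmatched patterns fall back to "other"
```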
Therefore, through the vehicle axle identification method based on deep learning, the axles of vehicles in various special scenes can be accurately identified, and the accuracy and the robustness of vehicle axle identification are greatly improved.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional units of the electronic device according to the method example, for example, each functional unit can be divided corresponding to each function, and two or more functions can be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
In the case where each functional module is divided according to its corresponding function, Fig. 8 is a block diagram of the functional units of the vehicle axle identification device based on deep learning provided by the embodiment of the application, which is applied to a processing module in an axle identification system, wherein the axle identification system further comprises an imaging module and a passing detection module; the vehicle axle identification apparatus 800 includes:
A first recognition unit 810, configured to input an image sequence of a target vehicle in a first period from the imaging module into a first recognition model, determine an axle region image sequence according to an output of the first recognition model, where a start time of the first period is a time when a first instruction sent by the passing detection module is received, and an end time of the first period is a time when a second instruction sent by the passing detection module is received;
A second recognition unit 820 for inputting the axle region image sequence into a second recognition model, and determining axle type data of each axle region image according to the output of the second recognition model, wherein the axle type data comprises a single tire or a double tire;
A counting unit 830, configured to perform motion prediction analysis on the axle region image sequence to determine a first number of first axles;
A first screening unit 840, configured to determine a movement path of each first axle in the first period, and screen the first number of first axles to determine a second number of second axles;
A second screening unit 850, configured to determine a standard axle position and a standard axle size according to the position and the size of the first frame axle region image in the axle region image sequence, and screen the second number of second axles to determine a third number of target axles that conform to the standard axle position and the standard axle size;
An axle determining unit 860 for determining axle data of the target vehicle, the axle data comprising axle type data of each target axle.
Therefore, the vehicle wheel axle identification device based on deep learning is applied to the processing module in the axle identification system, where the axle identification system further includes an imaging module and a passing detection module. First, an image sequence of the target vehicle within a first period from the imaging module is input into a first recognition model and an axle region image sequence is determined according to its output, where the starting moment of the first period is the moment a first instruction sent by the passing detection module is received and the ending moment is the moment a second instruction sent by the passing detection module is received. Then the axle region image sequence is input into a second recognition model and the axle type data of each axle region image, comprising a single tire or a double tire, is determined according to its output. Next, motion prediction analysis is performed on the axle region image sequence to determine a first number of first axles; the movement path of each first axle in the first period is determined, and the first number of first axles is screened to determine a second number of second axles; a standard axle position and standard axle size are determined from the position and size of the first frame axle region image, and the second number of second axles is screened to determine a third number of target axles conforming to the standard axle position and standard axle size. Finally, the axle data of the target vehicle, including the axle type data of each target axle, is determined. In this way, the axles of vehicles in various special scenarios can be accurately identified, greatly improving the accuracy and robustness of vehicle axle identification.
It should be noted that the specific implementation of each operation is as described in the foregoing method embodiments, and the vehicle axle identification device 800 may be used to perform the foregoing method embodiments of the present application; details are not repeated here.
The embodiment of the application also provides electronic equipment, which comprises: a processor, a memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the processor, the programs including some or all of the steps for performing any of the methods as recited in the method embodiments above.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program makes a computer execute part or all of the steps of any one of the methods described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform part or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package.
For simplicity of description, each of the foregoing method embodiments is expressed as a series of action combinations. However, those skilled in the art will appreciate that the application is not limited by the order of the actions described, as some steps in the embodiments of the application may be performed in other orders or concurrently. In addition, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions, steps, modules or units involved are not necessarily required by the embodiments of the application.
The description of each of the foregoing embodiments has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in a RAM, a flash memory, a ROM, an EPROM, an electrically erasable programmable ROM (EEPROM), a register, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC; in addition, the ASIC may be located in a terminal device or a management device. The processor and the storage medium may also reside as discrete components in a terminal device or management device.
Those skilled in the art will appreciate that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented, in whole or in part, in software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., a solid state drive (SSD)), or the like.
The respective apparatuses and the respective modules/units included in the products described in the above embodiments may be software modules/units, may be hardware modules/units, or may be partly software modules/units, and partly hardware modules/units. For example, for each device or product applied to or integrated on a chip, each module/unit included in the device or product may be implemented in hardware such as a circuit, or at least some modules/units may be implemented in software program, where the software program runs on a processor integrated inside the chip, and the remaining (if any) part of modules/units may be implemented in hardware such as a circuit; for each device and product applied to or integrated in the chip module, each module/unit contained in the device and product can be realized in a hardware manner such as a circuit, different modules/units can be located in the same component (such as a chip, a circuit module and the like) or different components of the chip module, or at least part of the modules/units can be realized in a software program, the software program runs on a processor integrated in the chip module, and the rest (if any) of the modules/units can be realized in a hardware manner such as a circuit; for each device, product, or application to or integrated with the terminal device, each module/unit included in the device may be implemented in hardware such as a circuit, and different modules/units may be located in the same component (e.g., a chip, a circuit module, etc.) or different components in the terminal device, or at least some modules/units may be implemented in a software program, where the software program runs on a processor integrated within the terminal device, and the remaining (if any) some modules/units may be implemented in hardware such as a circuit.
The foregoing detailed description of the embodiments of the present application further illustrates the purposes, technical solutions and advantageous effects of the embodiments of the present application, and it should be understood that the foregoing description is only a specific implementation of the embodiments of the present application, and is not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements, etc. made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. A vehicle wheel axle identification method based on deep learning, characterized in that it is applied to a processing module in a wheel axle identification system, wherein the wheel axle identification system further comprises an imaging module and a vehicle passing detection module; the method comprises the following steps:
Inputting an image sequence of a target vehicle in a first period from the imaging module into a first recognition model, and determining an axle region image sequence according to the output of the first recognition model, wherein the starting time of the first period is the time when a first instruction sent by the vehicle passing detection module is received, and the ending time of the first period is the time when a second instruction sent by the vehicle passing detection module is received;
Inputting the axle region image sequence into a second recognition model, and determining axle type data of each axle region image according to the output of the second recognition model, wherein the axle type data comprises a single tire or a double tire;
Performing motion prediction analysis on the axle region image sequence to determine a first number of first wheel axles;
Determining a movement path of each first wheel axle in the first period, and screening the first number of first wheel axles to determine a second number of second wheel axles;
Determining a standard axle position and a standard axle size according to the position and the size of the first frame axle region image in the axle region image sequence, and screening the second number of second wheel axles to determine a third number of target wheel axles conforming to the standard axle position and the standard axle size;
Determining axle data of the target vehicle, wherein the axle data comprises the axle type data of each target wheel axle.
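For orientation, the following is a minimal Python sketch of how the five steps recited in claim 1 could be wired together. Every helper passed in (detect, classify, count_by_motion, screen_by_direction, screen_by_standard) is a hypothetical stand-in for the corresponding claim step, not an API defined by this application.

```python
from typing import Callable, Sequence

def identify_vehicle_axles(
    frames: Sequence,              # images captured between the first and second instructions
    detect: Callable,              # first recognition model: frame -> axle region images
    classify: Callable,            # second recognition model: region -> "single" | "double"
    count_by_motion: Callable,     # motion prediction analysis -> first axles
    screen_by_direction: Callable, # movement-path screening -> second axles
    screen_by_standard: Callable,  # position/size screening -> target axles
) -> dict:
    # Step 1: build the axle region image sequence with the first recognition model.
    regions = [r for frame in frames for r in detect(frame)]
    # Steps 3-5: count candidate axles, then screen them twice.
    first_axles = count_by_motion(regions)
    second_axles = screen_by_direction(first_axles)
    target_axles = screen_by_standard(second_axles)
    # Steps 2 and 6: classify each surviving axle region and assemble the axle data.
    return {
        "axle_count": len(target_axles),
        "axle_types": [classify(axle) for axle in target_axles],
    }
```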
2. The method of claim 1, wherein the first recognition model comprises a first convolution module, a first spatial frequency module, a second spatial frequency module, a third spatial frequency module, a fourth spatial frequency module, a fifth spatial frequency module, a sixth spatial frequency module, a second convolution module, a first upsampling module, a first connection module, a first depth convolution module, a second upsampling module, a second connection module, a second depth convolution module, a first convolution layer, a third depth convolution module, a first superposition layer, a fourth depth convolution module, a second convolution layer, a fifth depth convolution module, a second superposition layer, a sixth depth convolution module, and a third convolution layer; and wherein the inputting the image sequence of the target vehicle in the first period from the imaging module into the first recognition model and determining the axle region image sequence according to the output of the first recognition model comprises:
Preprocessing each image in the image sequence, inputting each preprocessed image into the first recognition model, and determining the axle region image sequence according to the output of the first recognition model for each image, wherein each preprocessed image input into the first recognition model undergoes the following processing:
Sequentially passing the preprocessed image through the first convolution module, the first spatial frequency module and the second spatial frequency module to obtain first intermediate data;
Sequentially passing the first intermediate data through the third spatial frequency module and the fourth spatial frequency module to obtain second intermediate data;
Sequentially passing the second intermediate data through the fifth spatial frequency module, the sixth spatial frequency module and the second convolution module to obtain third intermediate data;
Connecting, through the first connection module, first upsampled data obtained by passing the third intermediate data through the first upsampling module with the second intermediate data, and passing the connected data through the first depth convolution module to obtain fourth intermediate data;
Connecting, through the second connection module, second upsampled data obtained by passing the fourth intermediate data through the second upsampling module with the first intermediate data, and passing the connected data through the second depth convolution module to obtain fifth intermediate data;
Passing the fifth intermediate data through the first convolution layer to obtain first output data; separately passing the fifth intermediate data through the third depth convolution module, superposing the result with the fourth intermediate data in the first superposition layer, and passing the superposed data through the fourth depth convolution module to obtain sixth intermediate data;
Passing the sixth intermediate data through the second convolution layer to obtain second output data; separately passing the sixth intermediate data through the fifth depth convolution module, superposing the result with the third intermediate data in the second superposition layer, and sequentially passing the superposed data through the sixth depth convolution module and the third convolution layer to obtain third output data.
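Read literally, claim 2 describes a three-scale encoder, two top-down upsampling fusions, and three convolutional output heads — a layout reminiscent of YOLO-style detectors with a feature-pyramid neck. The PyTorch sketch below mirrors that dataflow; the channel widths, the head size n_out, and the use of plain Conv-BN-SiLU blocks in place of the unspecified "spatial frequency" and "depth convolution" modules are all assumptions.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Conv + BN + SiLU; an assumed stand-in for the modules named in claim 2."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(),
        )

    def forward(self, x):
        return self.body(x)

class AxleRegionNet(nn.Module):
    def __init__(self, c=32, n_out=6):  # n_out: per-cell box + class channels (assumed)
        super().__init__()
        self.stem = ConvBlock(3, c, s=2)                         # first convolution module
        self.sf12 = nn.Sequential(ConvBlock(c, 2 * c, s=2), ConvBlock(2 * c, 2 * c))
        self.sf34 = nn.Sequential(ConvBlock(2 * c, 4 * c, s=2), ConvBlock(4 * c, 4 * c))
        self.sf56 = nn.Sequential(ConvBlock(4 * c, 8 * c, s=2), ConvBlock(8 * c, 8 * c),
                                  ConvBlock(8 * c, 4 * c, k=1))  # second convolution module
        self.up = nn.Upsample(scale_factor=2, mode="nearest")    # both upsampling modules
        self.dc1 = ConvBlock(8 * c, 2 * c)       # first depth convolution module (post-concat)
        self.dc2 = ConvBlock(4 * c, 2 * c)       # second depth convolution module (post-concat)
        self.dc3 = ConvBlock(2 * c, 2 * c, s=2)  # third depth convolution module
        self.dc4 = ConvBlock(2 * c, 2 * c)       # fourth depth convolution module
        self.dc5 = ConvBlock(2 * c, 4 * c, s=2)  # fifth depth convolution module
        self.dc6 = ConvBlock(4 * c, 4 * c)       # sixth depth convolution module
        self.head1 = nn.Conv2d(2 * c, n_out, 1)  # first convolution layer
        self.head2 = nn.Conv2d(2 * c, n_out, 1)  # second convolution layer
        self.head3 = nn.Conv2d(4 * c, n_out, 1)  # third convolution layer

    def forward(self, x):                        # x: (N, 3, H, W), H and W divisible by 16
        x1 = self.sf12(self.stem(x))             # first intermediate data, stride 4
        x2 = self.sf34(x1)                       # second intermediate data, stride 8
        x3 = self.sf56(x2)                       # third intermediate data, stride 16
        x4 = self.dc1(torch.cat([self.up(x3), x2], 1))  # fourth intermediate data
        x5 = self.dc2(torch.cat([self.up(x4), x1], 1))  # fifth intermediate data
        out1 = self.head1(x5)                    # first output data
        x6 = self.dc4(self.dc3(x5) + x4)         # sixth intermediate data (first superposition)
        out2 = self.head2(x6)                    # second output data
        out3 = self.head3(self.dc6(self.dc5(x6) + x3))  # third output (second superposition)
        return out1, out2, out3
```

Here torch.cat together with nn.Upsample plays the role of the connection and upsampling modules, and the two element-wise additions realize the superposition layers; calling AxleRegionNet() on a (1, 3, 256, 256) tensor yields three feature maps at strides 4, 8, and 16.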
3. The method of claim 1, wherein the second recognition model comprises an initial convolution module, a full pooling layer, a residual network, an average pooling layer, and an output layer; and wherein the inputting the axle region image sequence into the second recognition model and determining the axle type data of each axle region image according to the output of the second recognition model comprises:
Sequentially passing each axle region image through the initial convolution module, the full pooling layer, the residual network, the average pooling layer and the output layer to obtain the axle type data of each axle region image.
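The layer sequence recited in claim 3 matches the shape of a standard residual classifier. A minimal sketch, assuming an off-the-shelf ResNet-18 is an acceptable stand-in for the unspecified initial convolution, pooling, and residual stages:

```python
import torch.nn as nn
import torchvision.models as models

def build_axle_classifier(num_classes: int = 2) -> nn.Module:
    """Single-tire vs. double-tire classifier. ResNet-18 already contains an
    initial convolution, a max pooling layer, residual stages, and an average
    pooling layer, roughly mirroring claim 3; only the output layer is replaced."""
    net = models.resnet18(weights=None)  # train from scratch on axle region crops
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net
```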
4. The method of claim 1, wherein the performing motion prediction analysis on the axle region image sequence to determine a first number of first axles comprises:
Performing the following processing on each pair of consecutive frames of axle region images to determine the first number of first axles:
Determining first motion data of an axle corresponding to a first axle region image, wherein the first motion data comprises first position data, first acceleration data and first speed data;
Determining predicted position data according to the first motion data;
Determining second motion data of the wheel axle corresponding to the second wheel axle area image, wherein the second motion data comprises second position data, second acceleration data and second speed data;
If the matching degree of the predicted position data and the second position data is higher than or equal to a preset matching threshold value, determining that the wheel axle corresponding to the first wheel axle area image and the wheel axle corresponding to the second wheel axle area image are the same wheel axle;
And if the matching degree of the predicted position data and the second position data is lower than the preset matching threshold value, determining that the wheel axle corresponding to the first wheel axle area image and the wheel axle corresponding to the second wheel axle area image are different wheel axles.
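A minimal sketch of the claim-4 matching rule, assuming one-dimensional horizontal motion, a constant-acceleration prediction, and a distance-based matching degree; the normalization constant and threshold below are illustrative, not values from this application.

```python
from dataclasses import dataclass

@dataclass
class MotionData:
    position: float      # horizontal centre of the axle region, in pixels
    velocity: float      # pixels per frame
    acceleration: float  # pixels per frame squared

def predicted_position(m: MotionData, dt: float = 1.0) -> float:
    # Constant-acceleration kinematics: x' = x + v*dt + a*dt^2 / 2
    return m.position + m.velocity * dt + 0.5 * m.acceleration * dt * dt

def is_same_axle(first: MotionData, second: MotionData,
                 match_threshold: float = 0.8, scale: float = 50.0) -> bool:
    """Convert the gap between predicted and observed position into a
    matching degree in [0, 1]; `scale` is an assumed pixel normalizer."""
    gap = abs(predicted_position(first) - second.position)
    matching_degree = max(0.0, 1.0 - gap / scale)
    return matching_degree >= match_threshold
```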
5. The method of claim 1, wherein the determining a movement path of each first wheel axle in the first period and screening the first number of first wheel axles to determine a second number of second wheel axles comprises:
Determining a movement path of each first wheel axle in the first period, and determining the number of movement paths in a first direction and the number of movement paths in a second direction;
If the number of the movement paths in the first direction is greater than half of the first number, screening out the first wheel axles corresponding to the movement paths in the first direction as the second wheel axles, wherein the second number is the number of the movement paths in the first direction;
If the number of the movement paths in the second direction is greater than half of the first number, screening out the first wheel axles corresponding to the movement paths in the second direction as the second wheel axles, wherein the second number is the number of the movement paths in the second direction.
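One plausible reading of claim 5, sketched below: the majority travel direction identifies the target vehicle's own axles and discards tracks from reflections or opposing traffic. Representing a movement path as a list of horizontal positions is an assumption.

```python
def screen_by_direction(paths: dict) -> list:
    """paths: first-axle id -> list of horizontal positions over the first
    period. Returns the ids kept as second axles (the majority direction)."""
    first_dir = [i for i, p in paths.items() if p[-1] > p[0]]   # moving right
    second_dir = [i for i, p in paths.items() if p[-1] < p[0]]  # moving left
    if len(first_dir) > len(paths) / 2:
        return first_dir     # the second number = len(first_dir)
    if len(second_dir) > len(paths) / 2:
        return second_dir
    return []                # no clear majority; nothing passes the screen
```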
6. The method of claim 1, wherein the determining a standard axle position and a standard axle size according to the position and the size of the first frame axle region image in the axle region image sequence and screening the second number of second axles to determine a third number of target axles conforming to the standard axle position and the standard axle size comprises:
Determining the first axle region image obtained after the first instruction is received as a standard axle image;
Determining the standard axle position and the standard axle size according to the height and the size of the standard axle image;
Matching the axle position of each second axle with the standard axle position to obtain a first matching degree, and matching the axle size of each second axle with the standard axle size to obtain a second matching degree;
Determining each second axle whose first matching degree is greater than a first matching degree threshold and whose second matching degree is greater than a second matching degree threshold as a target axle, so as to obtain the third number of target axles.
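A sketch of the claim-6 screen, under the assumption that axle position is summarized by the vertical centre of the region and axle size by its height; both matching-degree formulas and thresholds are illustrative.

```python
def screen_by_standard(axles, std_y: float, std_h: float,
                       pos_threshold: float = 0.8,
                       size_threshold: float = 0.8) -> list:
    """axles: list of (y_centre, height) tuples for the second axles, in
    pixels; std_y and std_h come from the first frame axle region image."""
    targets = []
    for y, h in axles:
        position_match = max(0.0, 1.0 - abs(y - std_y) / std_h)  # first matching degree
        size_match = min(h, std_h) / max(h, std_h)               # second matching degree
        if position_match > pos_threshold and size_match > size_threshold:
            targets.append((y, h))
    return targets  # the third number of target axles = len(targets)
```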
7. The method of claim 1, wherein after the determining axle data of the target vehicle, the method further comprises:
Determining vehicle model data of the target vehicle according to the third number, the axle type data of each target axle, and the arrangement order of the target axles.
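Claim 7 amounts to a table lookup keyed on the axle count and the ordered single/double pattern. A toy sketch; the table entries are invented for illustration and are not part of this application.

```python
# Hypothetical mapping from (axle count, ordered axle types) to a model label.
MODEL_TABLE = {
    (2, ("single", "single")): "2-axle car or bus",
    (2, ("single", "double")): "2-axle light truck",
    (3, ("single", "double", "double")): "3-axle heavy truck",
}

def vehicle_model(axle_types: list) -> str:
    """axle_types lists each target axle's type data in arrangement order."""
    key = (len(axle_types), tuple(axle_types))
    return MODEL_TABLE.get(key, "unknown")
```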
8. A vehicle wheel axle identification device based on deep learning, characterized in that it is applied to a processing module in a wheel axle identification system, wherein the wheel axle identification system further comprises an imaging module and a vehicle passing detection module; the device comprises:
a first recognition unit, configured to input an image sequence of a target vehicle in a first period from the imaging module into a first recognition model and determine an axle region image sequence according to the output of the first recognition model, wherein the starting moment of the first period is the moment at which a first instruction sent by the vehicle passing detection module is received, and the ending moment of the first period is the moment at which a second instruction sent by the vehicle passing detection module is received;
a second recognition unit, configured to input the axle region image sequence into a second recognition model and determine the axle type data of each axle region image according to the output of the second recognition model, wherein the axle type data comprises a single tire or a double tire;
a counting unit, configured to perform motion prediction analysis on the axle region image sequence to determine a first number of first axles;
a first screening unit, configured to determine a movement path of each first axle in the first period and screen the first number of first axles to determine a second number of second axles;
a second screening unit, configured to determine a standard axle position and a standard axle size according to the position and the size of the first frame axle region image in the axle region image sequence, and screen the second number of second axles to determine a third number of target axles conforming to the standard axle position and the standard axle size;
an axle determination unit, configured to determine axle data of the target vehicle, wherein the axle data comprises the axle type data of each target axle.
9. An electronic device, comprising: a processor, a memory, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method of any one of claims 1-7.
10. A computer storage medium storing a computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1-7.
CN202410346452.9A 2024-03-26 2024-03-26 Vehicle wheel axle identification method and device based on deep learning Pending CN117953460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410346452.9A CN117953460A (en) 2024-03-26 2024-03-26 Vehicle wheel axle identification method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410346452.9A CN117953460A (en) 2024-03-26 2024-03-26 Vehicle wheel axle identification method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN117953460A true CN117953460A (en) 2024-04-30

Family

ID=90798302

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410346452.9A Pending CN117953460A (en) 2024-03-26 2024-03-26 Vehicle wheel axle identification method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN117953460A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3250060C2 (en) * 1981-05-08 1994-05-05 Tuboscope Vetco Int Magnetic flux leakage non-destructive inspection appts.
FR2875541A1 * 2004-09-17 2006-03-24 Siemens Ag Internal combustion engine e.g. diesel engine, synchronizing method, involves finding probability with which camshafts' angular position is found from presence of partial/total concordance of detected camshaft signals with reference models
WO2018157862A1 (en) * 2017-03-02 2018-09-07 腾讯科技(深圳)有限公司 Vehicle type recognition method and device, storage medium and electronic device
CN110481540A (en) * 2018-05-14 2019-11-22 吕杉 Safe stable control system for blowout of automobile tyre
CN109919072A (en) * 2019-02-28 2019-06-21 桂林电子科技大学 Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN110135421A (en) * 2019-05-17 2019-08-16 梧州学院 Licence plate recognition method, device, computer equipment and computer readable storage medium
CN111325146A (en) * 2020-02-20 2020-06-23 吉林省吉通信息技术有限公司 Truck type and axle type identification method and system
CN116457259A (en) * 2021-01-28 2023-07-18 浙江吉利控股集团有限公司 Vehicle driving control method and device, vehicle and storage medium
CN112836631A (en) * 2021-02-01 2021-05-25 南京云计趟信息技术有限公司 Vehicle axle number determining method and device, electronic equipment and storage medium
CN113392695A (en) * 2021-04-02 2021-09-14 太原理工大学 Highway truck and axle identification method thereof
CN113375618A (en) * 2021-08-12 2021-09-10 宁波柯力传感科技股份有限公司 Vehicle type identification method based on axle distance calculation
CN114463626A (en) * 2021-12-30 2022-05-10 浙江大华技术股份有限公司 Vehicle axle identification method, system, electronic device and storage medium
CN115512321A (en) * 2022-09-27 2022-12-23 北京信路威科技股份有限公司 Vehicle weight limit information identification method, computing device and storage medium
CN115855276A (en) * 2022-11-21 2023-03-28 中车青岛四方车辆研究所有限公司 System and method for detecting temperature of key components of urban rail vehicle based on deep learning
CN117037105A (en) * 2023-09-28 2023-11-10 四川蜀道新能源科技发展有限公司 Pavement crack filling detection method, system, terminal and medium based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHANG Nian, ZHANG Liang: "Recognition of highway truck vehicle types based on deep learning", Journal of Traffic and Transportation Engineering, vol. 23, no. 1, 28 February 2023 (2023-02-28) *
XU Lexian; CHEN Xijiang; BAN Ya; HUANG Dan: "Intelligent parking space detection method based on deep learning", Chinese Journal of Lasers, no. 04, 2 February 2019 (2019-02-02) *
CAI Xuan; WANG Changlin; LIN Ying: "Research on multi-sensor-based methods for detecting and correcting train wheel spin and slide", Urban Mass Transit, no. 01, 10 January 2015 (2015-01-10) *

Similar Documents

Publication Publication Date Title
CN111311914B (en) Vehicle driving accident monitoring method and device and vehicle
CN108001456A (en) Object Classification Adjustment Based On Vehicle Communication
CN111667048A (en) Convolutional neural network system for object detection and lane detection in a motor vehicle
US9842283B2 (en) Target object detection system and a method for target object detection
US20210295441A1 (en) Using vehicle data and crash force data in determining an indication of whether a vehicle in a vehicle collision is a total loss
Xie et al. CNN-based driving maneuver classification using multi-sliding window fusion
CN110610137B (en) Method and device for detecting vehicle running state, electronic equipment and storage medium
CN109029661B (en) Vehicle overload identification method, device, terminal and storage medium
CN113515985B (en) Self-service weighing system, weighing detection method, weighing detection equipment and storage medium
CN111292432A (en) Vehicle charging type distinguishing method and device based on vehicle type recognition and wheel axle detection
CN110232827B (en) Free flow toll collection vehicle type identification method, device and system
CN116246470A (en) Expressway portal fee evasion checking method and device, electronic equipment and storage medium
CN105632180A (en) System and method of recognizing tunnel entrance vehicle type based on ARM
CN114092683A (en) Tire deformation identification method and device based on visual feedback and depth network
CN114119955A (en) Method and device for detecting potential dangerous target
CN113903180B (en) Method and system for detecting vehicle overspeed on expressway
KR102017059B1 (en) System and method for detecting a vehicle that exceeds the allowable load weight by taking the axle while driving
CN111222394A (en) Muck truck overload detection method, device and system
CN111310696B (en) Parking accident identification method and device based on analysis of abnormal parking behaviors and vehicle
KR20210152025A (en) On-Vehicle Active Learning Method and Apparatus for Learning Perception Network of Autonomous Vehicle
CN117953460A (en) Vehicle wheel axle identification method and device based on deep learning
CN114724107B (en) Image detection method, device, equipment and medium
CN115520216A (en) Driving state judging method and device, computer equipment and storage medium
CN113569702A (en) Deep learning-based truck single-tire and double-tire identification method
CN114255452A (en) Target ranging method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination