CN117746346B - Heavy vehicle load identification method - Google Patents


Info

Publication number
CN117746346B
CN117746346B (application CN202310710655.7A)
Authority
CN
China
Prior art keywords
vehicle
identified
model
data
deflection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310710655.7A
Other languages
Chinese (zh)
Other versions
CN117746346A (en)
Inventor
应国刚
胡洁亮
张文达
陈维敏
任浩东
姚源彬
罗方
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Langda Technology Co ltd
Original Assignee
Ningbo Langda Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Langda Technology Co ltd filed Critical Ningbo Langda Technology Co ltd
Priority to CN202310710655.7A
Publication of CN117746346A
Application granted
Publication of CN117746346B


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The application discloses a heavy vehicle load identification method. First, vehicle-actuated deflection data of a target section of a bridge to be monitored are collected and processed by a BiLSTM vehicle weight identification model, which through training and learning obtains the mapping relation between deflection and vehicle weight so as to invert the actual weight of the vehicle to be identified. Meanwhile, video image data of the vehicle to be identified passing the target section of the bridge to be monitored are collected and processed by a YOLO target detection model, which outputs the vehicle type of the vehicle to be identified. Finally, the actual vehicle weight and vehicle type output by the BiLSTM vehicle weight identification model and the YOLO target detection model are paired and fused, and the fused information is jointly output to a video monitoring picture for presentation of the identification result.

Description

Heavy vehicle load identification method
Technical Field
The application relates to the field of bridge safety monitoring, in particular to a heavy vehicle load identification method.
Background
One of the most significant loads to which bridge structures are subjected during operational service is the moving vehicle load, of which overweight vehicles are the main contributor to damage to the bridge structure. Bridge vehicle load identification is therefore important to the operation and maintenance management of the structure, especially for small and medium span bridges, which are numerous and have long service lives.
Currently, among vehicle load identification methods in practical application, the bridge weigh-in-motion (BWIM) system is widely deployed nationwide; it uses image recognition technology and weighing sensors to identify and record vehicle type, weight, speed, and the like.
However, the existing BWIM system has the following defect: when the passing traffic flow is dense, its identification accuracy for vehicle load drops greatly. For example, two two-axle vehicles with small front and rear gaps and similar speeds may be recognized as one four-axle vehicle, or the detected vehicle weight information may be inaccurate because a vehicle does not travel in its normal lane.
In another approach, a high-definition camera is arranged on the bridge, and vehicle load spatio-temporal information on the bridge structure is identified through algorithms such as target detection.
Disclosure of Invention
An object of the present application is to provide a heavy vehicle load identification method that identifies vehicle load information more quickly, in real time, and more accurately.
In order to achieve the above purpose, the application adopts the following technical scheme: a heavy vehicle load identification method comprising the steps of:
S100, vehicle weight identification, namely acquiring vehicle actuation deflection data of a target section of a bridge to be monitored, processing the vehicle actuation deflection data through a BiLSTM vehicle weight identification model, and obtaining through training and learning a mapping relation between deflection and vehicle weight so as to invert the actual vehicle weight of the vehicle to be identified;
S200, vehicle model identification, namely acquiring video image data of a vehicle to be identified passing through a target section of a bridge to be monitored, processing the video image data through a YOLO target detection model, and outputting the vehicle model of the vehicle to be identified;
And S300, information fusion and output, namely, carrying out information pairing fusion on the actual weight and the vehicle type of the vehicle to be identified, which are output by the BiLSTM weight identification model and the YOLO target detection model, and jointly outputting the fused information to a video monitoring picture for identification result presentation.
According to an embodiment of the present application, the step S100 includes the steps of:
S110, installing a deflection measuring instrument at a target section position of a bridge to be monitored, and collecting vehicle actuation deflection data of a plurality of time points at the target section in real time;
s120, finding a peak time point of change in the vehicle actuation deflection data at a plurality of time points, and intercepting a vehicle actuation deflection data sample with a proper time length based on the peak time point;
S130, screening the BWIM vehicle weight data of the vehicle to be identified in the system database, calculating the time T_i at which the vehicle to be identified, having passed the BWIM system, reaches the target section from the distance difference between the BWIM system on the bridge to be monitored and the target section, and matching the vehicle actuation deflection data sample with the vehicle weight data according to the time T_i to form the training data set of the BiLSTM vehicle weight identification model;
S140, constructing a BiLSTM vehicle weight recognition model, and inputting the training data set into the BiLSTM vehicle weight recognition model for training and learning until the test result of the BiLSTM vehicle weight recognition model meets the recognition precision.
Preferably, in the step S110, the deflection measuring instrument is a millimeter wave radar, and is mounted to the bottom of the bridge deck structure at the target section position of the bridge to be monitored, and the vehicle actuation deflection data at the target section position is measured and acquired.
Preferably, in the step S120, a peak time point in a suitable deflection amplitude range is found in the vehicle actuation deflection data of a plurality of time points through a peak finding algorithm.
Preferably, in the step S130, the time T_i for the vehicle to reach the target section is calculated as follows:
T_i = t_i ± L/v
wherein t_i is the time at which the BWIM system identifies the vehicle to be identified, L is the distance difference between the BWIM system on the bridge to be monitored and the target section, v is the speed of the vehicle to be identified when it passes the BWIM system, and the speed of the vehicle before it travels to the target section position is assumed to be constant.
According to an embodiment of the present application, the step S200 includes the steps of:
S210, arranging a monitoring instrument near a target section of a bridge to be monitored, and using the monitoring instrument as the video image data acquisition equipment of the YOLO target detection model for identifying the type of the vehicle to be identified at the target section;
S220, constructing a YOLO target detection model, performing data marking on collected video image data of different vehicle types to form a training data set of the YOLO target detection model, and inputting the training data set into the YOLO target detection model for training until the test result of the YOLO target detection model meets the recognition precision.
Preferably, in the step S210, the monitoring apparatus is a high-definition camera and is mounted on the auxiliary structure of the bridge near the target section of the bridge to be monitored at a proper height and angle.
According to an embodiment of the present application, the step S300 includes the steps of:
s310, uploading the packaged model programs of the BiLSTM vehicle weight recognition model and the YOLO target detection model and model training parameters to a server;
S320, when the vehicle to be identified runs through the target section, the deflection measuring instrument transmits the acquired real-time vehicle actuation deflection data to a database of a server, and the server inputs the vehicle actuation deflection data received in real time to a BiLSTM vehicle weight identification model program for calculation to obtain a vehicle weight result of the vehicle to be identified;
S330, when the vehicle to be identified runs in the identification range of the monitoring instrument, video image data are collected and input into a YOLO target detection model to identify the vehicle type of the vehicle to be identified;
and S340, fusing the output results of the BiLSTM vehicle weight recognition model and the YOLO target detection model and feeding back to the video monitoring picture at the same time.
Preferably, in step S320, the server extracts the vehicle actuation deflection data received in real time according to a preset time length, and then inputs the extracted vehicle actuation deflection data into the BiLSTM vehicle weight recognition model program.
Preferably, the vehicle actuated deflection data includes at least a deflection time course and a time corresponding thereto.
Compared with the prior art, the application has the beneficial effects that:
1. The heavy vehicle load identification method adopts lightweight data acquisition means and technology. Using the BiLSTM and YOLO models, the acquired data can be processed rapidly, heavy vehicles can be detected in real time, and the corresponding vehicle weights can be identified, so that each heavy vehicle is tracked and recorded; compared with the conventional BWIM system, this means identifies and responds to the target vehicle load faster and in real time.
2. The heavy vehicle load identification method provided by the application perceives and acquires information related to the vehicle load from different angles and processes and learns that information with deep learning models, so as to derive and judge the vehicle load information. Its basic principle is that by processing multi-source heterogeneous data and making decisions through multi-element information fusion, the target vehicle load is identified more accurately.
3. The heavy vehicle load identification method has stronger anti-interference performance. First, the deflection measurement of the bridge structure is based on millimeter wave radar, which has higher precision and is subject to fewer external interference factors; secondly, when the training data samples of the deep learning models are sufficient, deeper high-dimensional features between the input and output data information can be learned, greatly reducing the influence of noise; the combination of the two reduces the influence of noise even further.
Drawings
FIG. 1 is a system flow diagram in accordance with a preferred embodiment of the present application;
FIG. 2 is a schematic illustration of vehicle actuated deflection data according to a preferred embodiment of the present application;
FIG. 3 is an exemplary diagram of recognition results according to a preferred embodiment of the present application.
Detailed Description
The present application will be further described with reference to the following specific embodiments, and it should be noted that, on the premise of no conflict, new embodiments may be formed by any combination of the embodiments or technical features described below.
In the description of the present application, it should be noted that orientation terms such as "center", "lateral", "longitudinal", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" are based on the orientations or positional relationships shown in the drawings; they are used merely for convenience of describing the present application and simplifying the description, do not indicate that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and are not to be construed as limiting the scope of protection of the present application.
It should be noted that the terms "first," "second," and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The terms "comprises" and "comprising," along with any variations thereof, in the description and claims, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The application is further described below with reference to the accompanying drawings:
The different kinds of information presented by the same thing constitute people's knowledge of that thing; that is, the more types of information received, the deeper the understanding of the thing and the more accurate its identification.
The method disclosed by the application combines visual perception and measurement of physical characteristics of the structure, and can overcome the defect of inaccurate recognition precision caused by insufficient information quantity of a single method.
Vehicle identification based on machine vision perception enhances the visualization capability of the method provided by the application, making the identification effect more intuitive; measurement of the structural deformation can directly reflect the load of the vehicle to be identified, and the BiLSTM model can learn the mapping relation, analogous to the concept of structural stiffness, between the deflection response and the load, so that the vehicle weight can be accurately identified.
The method is intended to solve the problem that, in the vehicle load identification process of BWIM systems, the accuracy of vehicle load information identification is reduced by dense traffic flow, improper driving, and similar conditions. A vehicle load identification method based on the target detection technology alone can identify bridge deck vehicle load information in real time, but has larger errors in judging the weight of an overloaded heavy truck, relies on a single information source, and is limited in identification capability. In vehicle load identification, a fast, real-time and accurate method requires efficient processing of multi-source heterogeneous data and information fusion decision-making.
The method can collect multi-source heterogeneous data through real-time monitoring and process them quickly, then fuse the multi-element information to make a decision, and can detect and identify the target vehicle load more quickly, in real time, and more accurately, thereby tracking and recording each heavy vehicle.
The data acquisition frequency is high, and the calculation speed of the deep learning model is high, so that real-time data processing and recognition feedback can be realized.
Specifically, as shown in fig. 1 to 3, a preferred embodiment of the present application includes the steps of:
S100, vehicle weight identification, namely acquiring vehicle actuation deflection data of a target section of a bridge to be monitored, processing the vehicle actuation deflection data through a BiLSTM (bidirectional long short-term memory network) vehicle weight identification model, and obtaining through training and learning a mapping relation between deflection and vehicle weight so as to invert the actual vehicle weight of the corresponding vehicle to be identified;
S200, vehicle model identification, namely acquiring video image data of a vehicle to be identified passing through a target section of a bridge to be monitored, processing the video image data through a YOLO (You Only Look Once, a detector that identifies the classes and positions of objects in an image in a single pass) target detection model, and outputting the vehicle model of the vehicle to be identified;
And S300, information fusion and output, namely, carrying out information pairing fusion on the actual weight and the vehicle type of the vehicle to be identified, which are output by the BiLSTM weight identification model and the YOLO target detection model, and jointly outputting the fused information to a video monitoring picture for identification result presentation.
After construction and training are completed, the BiLSTM vehicle weight recognition model and the YOLO target detection model can rapidly process multi-source heterogeneous data and thus achieve a fast recognition speed while also attaining high recognition accuracy, satisfying the requirement of rapid recognition output under dense traffic flow or improper driving.
In some embodiments, the information collection and processing of step S100 and step S200 may be performed synchronously, so as to obtain a faster recognition speed and achieve real-time interaction of information.
Notably, the vehicle actuated deflection data may include at least a deflection time course (in mm) and the times (in s) corresponding thereto, with a vehicle actuated deflection data sample being obtained by recording deflection values at multiple points over different times.
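As an illustration, one such deflection data sample could be represented as paired time and deflection sequences; the dictionary layout and the numbers below are hypothetical, not values given by the application:

```python
# Hypothetical layout of one vehicle-actuated deflection data sample:
# a time sequence (s) paired one-to-one with a deflection time course (mm).
sample = {
    "time_s":        [0.00, 0.05, 0.10, 0.15, 0.20],
    "deflection_mm": [0.10, 0.90, 4.20, 2.10, 0.40],
}

# The two sequences must be the same length to form a valid sample.
assert len(sample["time_s"]) == len(sample["deflection_mm"])
print(max(sample["deflection_mm"]))  # 4.2 -- the peak deflection in this window
```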
In some embodiments, a plurality of target sections can be arranged on the bridge to be monitored to identify and monitor the vehicle, and information among the plurality of target sections can be subjected to comparison screening and pairing fusion, so that the probability of missing identification or identification errors is greatly reduced.
Specifically, step S100 includes the steps of:
s110, installing a deflection measuring instrument at a target section position of a bridge to be monitored, and collecting vehicle actuation deflection data of a plurality of time points at the target section in real time, wherein the vehicle actuation deflection data can be collected from one point at the target section or from a plurality of different points at the target section;
s120, finding a peak time point of change in the vehicle actuation deflection data at a plurality of time points, and intercepting a vehicle actuation deflection data sample with a proper time length based on the peak time point;
S130, screening the BWIM vehicle weight data of the vehicle to be identified in the system database, calculating the time T_i at which the vehicle to be identified, having passed the BWIM system, reaches the target section from the distance difference between the BWIM system on the bridge to be monitored and the target section, and matching the vehicle actuation deflection data sample with the vehicle weight data according to the time T_i to form the training data set of the BiLSTM vehicle weight identification model;
S140, constructing a BiLSTM vehicle weight recognition model, and inputting the training data set into the BiLSTM vehicle weight recognition model for training and learning until the test result of the BiLSTM vehicle weight recognition model meets the recognition precision.
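The model construction in step S140 might be sketched as follows. This is a hypothetical minimal example assuming PyTorch; the layer sizes, window length, and batch size are illustrative assumptions, not values specified by the application:

```python
# Minimal sketch of a BiLSTM regressor mapping a fixed-length deflection
# time series to a scalar vehicle weight. Sizes are illustrative only.
import torch
import torch.nn as nn

class BiLSTMWeightModel(nn.Module):
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        # Input shape: (batch, time_steps, 1) -- one deflection value per step.
        self.bilstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)  # 2x: forward + backward

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.bilstm(x)          # (batch, time, 2*hidden)
        return self.head(out[:, -1, :])  # weight estimate from the last step

model = BiLSTMWeightModel()
sample = torch.randn(8, 200, 1)  # 8 deflection windows of 200 samples each
pred = model(sample)
print(pred.shape)                # torch.Size([8, 1])
```

In training (not shown), each window would be paired with the BWIM-matched weight from step S130 and fitted with an ordinary regression loss such as MSE.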
In some embodiments, the deflection measuring instrument in step S110 is preferably a millimeter wave radar, which can acquire vehicle-actuated deflection data of the bridge to be monitored without being easily disturbed by ambient weather, thereby improving the accuracy of the acquired data.
The millimeter wave radar can be installed at the bottom of the bridge deck structure at the target section position of the bridge to be monitored to measure and collect the vehicle-actuated deflection data at that position. The bridge deck structure itself forms a shield, further reducing the probability that the millimeter wave radar is disturbed by external factors; meanwhile, since the vehicle-actuated deflection mainly originates from the bridge deck structure, the deflection data at the target section position can be obtained accurately.
As shown in fig. 2, in step S120, a peak time point within a suitable deflection amplitude range can be found in the vehicle actuation deflection data of multiple time points by the peak searching algorithm. The main purpose of selecting a suitable deflection amplitude range is noise reduction: it prevents the small deflection amplitudes produced by passing light vehicles from interfering with the peak time points formed when the heavy vehicles to be identified pass.
In some embodiments, the vehicle actuation deflection data of a plurality of time points may be assembled into a spectrum, and peak time points within a suitable deflection amplitude range may be found in the spectrum by a peak finding algorithm.
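A minimal sketch of such an amplitude-gated peak search follows. The threshold values (2 mm and 50 mm) are illustrative assumptions chosen to ignore light vehicles, not values stated by the application:

```python
# Sketch of step S120: find local maxima of the deflection time history whose
# amplitude lies in a chosen range, so light-vehicle peaks are ignored.

def find_peak_times(times, deflections, lo=2.0, hi=50.0):
    """Return time points of local maxima with lo <= amplitude <= hi (mm)."""
    peaks = []
    for i in range(1, len(deflections) - 1):
        d = deflections[i]
        # Local maximum AND within the accepted amplitude range.
        if deflections[i - 1] < d >= deflections[i + 1] and lo <= d <= hi:
            peaks.append(times[i])
    return peaks

times = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
defl  = [0.1, 0.8, 4.2, 1.0, 0.3, 1.1, 0.2]   # mm; one heavy-vehicle peak
print(find_peak_times(times, defl))            # [0.2] -- the 1.1 mm peak is below lo
```

In practice a library routine such as SciPy's peak finder could replace this loop; the amplitude gate is the essential part.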
In some embodiments, the time T_i for the vehicle to reach the target section in step S130 is calculated as follows:
T_i = t_i ± L/v
where t_i is the time at which the BWIM system identifies the vehicle to be identified, L is the distance difference between the BWIM system on the bridge to be monitored and the target section, and v is the speed of the vehicle to be identified when it passes the BWIM system.
T_i is calculated by fully considering the distance difference between the BWIM system and the target cross-section.
The purpose of the ± sign in the above formula is to account for the small delay with which the BWIM system recognizes the vehicle to be identified, so that this delay does not affect the numerical accuracy of T_i.
It should be noted that, in the above formula, after the BWIM system records the passing speed of the vehicle to be identified, the speed of the vehicle before it travels to the target section position must be assumed constant, so as to avoid interference with the calculation of T_i. Meanwhile, to avoid a larger error between the value of T_i and the actual value caused by changes in the vehicle speed, the BWIM system and the target section can be set as close together as possible; reducing the distance between the two improves the numerical accuracy of T_i and thus the identification accuracy.
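The arrival-time computation and the time-based matching of step S130 can be sketched as below. The record layout, the downstream/upstream flag, and the 1 s matching tolerance are illustrative assumptions:

```python
# Sketch of step S130 under the constant-speed assumption: compute the
# target-section arrival time T_i = t_i +/- L/v and pair each BWIM weight
# record with the nearest deflection-peak time.

def arrival_time(t_i, L, v, downstream=True):
    """Arrival time at the target section; the sign depends on which side of
    the BWIM installation the target section lies along the travel direction."""
    return t_i + L / v if downstream else t_i - L / v

def match_records(bwim_records, peak_times, L, v, tol=1.0):
    """Pair (t_i, weight) BWIM records with deflection peak times (seconds)."""
    pairs = []
    for t_i, weight in bwim_records:
        T_i = arrival_time(t_i, L, v)
        best = min(peak_times, key=lambda p: abs(p - T_i))
        if abs(best - T_i) <= tol:      # accept only near matches
            pairs.append((best, weight))
    return pairs

# Target section 40 m downstream of the BWIM system at 20 m/s -> 2 s travel.
print(arrival_time(100.0, 40.0, 20.0))                            # 102.0
print(match_records([(100.0, 42.5)], [95.0, 102.3], 40.0, 20.0))  # [(102.3, 42.5)]
```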
Specifically, step S200 includes the steps of:
S210, arranging a monitoring instrument near a target section of a bridge to be monitored, and using the monitoring instrument as the video image data acquisition equipment of the YOLO target detection model for identifying the type of the vehicle to be identified at the target section;
S220, constructing a YOLO target detection model, performing data marking on collected video image data of different vehicle types to form a training data set of the YOLO target detection model, and inputting the training data set into the YOLO target detection model for training until the test result of the YOLO target detection model meets the recognition precision.
In some embodiments, the monitoring instrument in step S210 is a high-definition camera; one or more high-definition cameras may be installed according to requirements, mounted on the auxiliary bridge structure near the target section of the bridge to be monitored at a proper height and angle. By arranging high-definition cameras to monitor the bridge deck around the clock, the probability of a vehicle being missed can be greatly reduced.
The vehicle identification form based on machine vision perception enhances the visualization capability of the method provided by the application, so that the identification effect is more visual, and the identification can be conveniently distinguished by personnel.
Specifically, step S300 includes the steps of:
s310, uploading the packaged model programs of the BiLSTM vehicle weight recognition model and the YOLO target detection model and model training parameters to a server;
S320, when the vehicle to be identified runs through the target section, the deflection measuring instrument transmits the acquired real-time vehicle actuation deflection data to a database of a server, and the server inputs the vehicle actuation deflection data received in real time to a BiLSTM vehicle weight identification model program for calculation to obtain a vehicle weight result of the vehicle to be identified;
S330, when the vehicle to be identified travels within the identification range of the monitoring instrument, video image data are collected, individual frames are extracted from the video, and the data are input into the YOLO target detection model to identify the vehicle type of the vehicle to be identified;
and S340, fusing the output results of the BiLSTM vehicle weight recognition model and the YOLO target detection model and feeding back to the video monitoring picture at the same time.
In some embodiments, the server in step S320 extracts the vehicle actuation deflection data received in real time according to a preset time length, and then inputs the extracted vehicle actuation deflection data into the BiLSTM vehicle weight recognition model program, so as to eliminate noise and improve accuracy.
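The preset-length extraction in step S320 can be sketched as a fixed-size rolling buffer over the incoming deflection stream. The window length of three samples is an illustrative assumption for brevity:

```python
# Sketch of step S320's preset-length extraction: the server buffers the
# real-time deflection stream and emits a fixed-length window once enough
# samples have arrived, which is then fed to the weight model program.
from collections import deque

class DeflectionBuffer:
    def __init__(self, window_len=3):
        self.window_len = window_len          # samples per model input
        self.buf = deque(maxlen=window_len)   # oldest sample drops out

    def push(self, value):
        """Append one sample; return a full window once enough have arrived."""
        self.buf.append(value)
        if len(self.buf) == self.window_len:
            return list(self.buf)
        return None

buf = DeflectionBuffer(window_len=3)
out = [buf.push(v) for v in [0.1, 0.5, 4.2, 1.0]]
print(out)  # [None, None, [0.1, 0.5, 4.2], [0.5, 4.2, 1.0]]
```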
It is noted that the server may be a local server, and may be kept running without networking, while avoiding data leakage.
In some embodiments, the fused output result can be fed back to the video monitoring picture in the form of a bounding box and text display and tracked dynamically, which facilitates observation and monitoring.
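The pairing fusion of step S340 can be sketched as a nearest-timestamp join between the two model outputs, followed by formatting of the overlay text. The field names, the 2 s tolerance, and the example values are illustrative assumptions:

```python
# Sketch of step S340: join the weight result from the BiLSTM model with the
# vehicle type from the YOLO model by nearest timestamp, then format the text
# shown on the video monitoring picture.

def fuse(weight_results, type_results, tol=2.0):
    """weight_results: [(t, tonnes)]; type_results: [(t, label)]."""
    fused = []
    for tw, tonnes in weight_results:
        tt, label = min(type_results, key=lambda r: abs(r[0] - tw))
        if abs(tt - tw) <= tol:   # only pair detections close in time
            fused.append({"time": tw, "type": label, "weight_t": tonnes})
    return fused

def overlay_text(item):
    """Label drawn next to the vehicle's bounding box on the monitor."""
    return f"{item['type']}  {item['weight_t']:.1f} t"

records = fuse([(102.3, 42.5)], [(102.0, "six-axle truck")])
print([overlay_text(r) for r in records])  # ['six-axle truck  42.5 t']
```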
After the BiLSTM vehicle weight recognition model and the YOLO target detection model finish training and learning, the assistance of the BWIM system can be canceled, and the recognition and detection of the vehicle to be recognized can be finished only by processing the data by means of the BiLSTM vehicle weight recognition model and the YOLO target detection model.
The foregoing has outlined the basic principles, features, and advantages of the present application. It will be understood by those skilled in the art that the present application is not limited to the embodiments described above; the above embodiments and descriptions merely illustrate the principles of the present application, and various changes and modifications may be made without departing from the spirit and scope of the application. The scope of the application is defined by the appended claims and their equivalents.

Claims (9)

1. A method for identifying a load of a heavy vehicle, comprising the steps of:
S100, vehicle weight identification, namely acquiring vehicle actuation deflection data of a target section of a bridge to be monitored, processing the vehicle actuation deflection data through a BiLSTM vehicle weight identification model, and obtaining through training and learning a mapping relation between deflection and vehicle weight so as to invert the actual vehicle weight of the vehicle to be identified;
S200, vehicle model identification, namely acquiring video image data of a vehicle to be identified passing through a target section of a bridge to be monitored, processing the video image data through a YOLO target detection model, and outputting the vehicle model of the vehicle to be identified;
S300, information fusion and output, namely carrying out information pairing fusion on the actual weight and the vehicle type of the vehicle to be identified, which are output by the BiLSTM weight identification model and the YOLO target detection model, and jointly outputting the fused information to a video monitoring picture for identification result presentation;
The step S100 includes the steps of:
S110, installing a deflection measuring instrument at a target section position of a bridge to be monitored, and collecting vehicle actuation deflection data of a plurality of time points at the target section in real time;
s120, finding a peak time point of change in the vehicle actuation deflection data at a plurality of time points, and intercepting a vehicle actuation deflection data sample with a proper time length based on the peak time point;
S130, screening the BWIM vehicle weight data of the vehicle to be identified in the system database, calculating the time T_i at which the vehicle to be identified, having passed the BWIM system, reaches the target section from the distance difference between the BWIM system on the bridge to be monitored and the target section, and matching the vehicle actuation deflection data sample with the vehicle weight data according to the time T_i to form the training data set of the BiLSTM vehicle weight identification model;
S140, constructing a BiLSTM vehicle weight recognition model, and inputting the training data set into the BiLSTM vehicle weight recognition model for training and learning until the test result of the BiLSTM vehicle weight recognition model meets the recognition precision.
2. The heavy vehicle load identification method of claim 1, wherein: in step S110, a millimeter wave radar is selected as a deflection measuring instrument, and the deflection measuring instrument is mounted at the bottom of a bridge deck structure at a target section position of a bridge to be monitored, and vehicle actuation deflection data at the target section position is measured and acquired.
3. The heavy vehicle load identification method of claim 1, wherein: in step S120, the peak time points within a suitable deflection amplitude range are found in the vehicle actuation deflection data at the plurality of time points by a peak-finding algorithm.
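As an illustration of such a peak-finding step (the thresholds, sampling rate and window length below are placeholder assumptions, not the patent's parameters), SciPy's `find_peaks` can locate peaks within an amplitude range and a fixed-length sample can be cut around each:

```python
import numpy as np
from scipy.signal import find_peaks

def extract_deflection_samples(deflection, fs=100.0, lo=2.0, hi=50.0, win_s=4.0):
    """Find peak time points whose amplitude lies in [lo, hi] and cut a
    fixed-length sample of win_s seconds centred on each peak.
    fs is the sampling rate in Hz; all values are illustrative."""
    peaks, _ = find_peaks(deflection, height=(lo, hi))
    half = int(win_s * fs / 2)
    samples = []
    for p in peaks:
        if p - half >= 0 and p + half < len(deflection):
            samples.append(deflection[p - half:p + half])
    return peaks / fs, samples

# synthetic deflection trace: one vehicle event modelled as a Gaussian bump
t = np.arange(0, 10, 0.01)
deflection = 10.0 * np.exp(-((t - 5.0) ** 2) / 0.5)
peak_times, samples = extract_deflection_samples(deflection)
```

Each returned sample would then serve as one input sequence for the weight recognition model.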
4. The heavy vehicle load identification method of claim 1, wherein: in step S130, the time Ti at which the vehicle reaches the target section is calculated as:
Ti = ti ± L/v
where ti is the time at which the BWIM system identifies the vehicle to be identified, L is the distance difference between the BWIM system on the bridge to be monitored and the target section, and v is the speed of the vehicle to be identified as it passes the BWIM system, the speed being assumed constant until the vehicle reaches the target section position.
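A worked example of the formula in claim 4 (the numbers are illustrative, and the sign convention for ± is an assumption about whether the target section lies downstream or upstream of the BWIM system):

```python
def arrival_time(t_i, L, v, downstream=True):
    """Time at which the vehicle reaches the target section.
    t_i: time (s) the BWIM system identifies the vehicle,
    L:   distance (m) between the BWIM system and the target section,
    v:   vehicle speed (m/s), assumed constant up to the target section.
    The + branch applies when the target section is downstream of the
    BWIM system, the - branch when it is upstream (the +/- in the claim)."""
    return t_i + L / v if downstream else t_i - L / v

# vehicle identified at t = 120 s, section 90 m downstream, speed 20 m/s
T_i = arrival_time(120.0, 90.0, 20.0)   # 120 + 90/20 = 124.5 s
```

The deflection sample whose peak time is closest to Ti would then be matched with that vehicle's BWIM weight record.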
5. The heavy vehicle load identification method according to claim 1, wherein said step S200 comprises the steps of:
S210, arranging a monitoring instrument near the target section of the bridge to be monitored, the monitoring instrument serving as the acquisition equipment for the video image data used by the YOLO target detection model to identify the type of the vehicle to be identified at the target section;
S220, constructing a YOLO target detection model, labelling the collected video image data of different vehicle types to form a training data set for the YOLO target detection model, and inputting the training data set into the YOLO target detection model for training until the test results of the YOLO target detection model meet the recognition accuracy requirement.
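The data labelling of step S220 is conventionally done in the YOLO annotation format: one text line per object, `class x_center y_center width height`, with all coordinates normalised to [0, 1]. A minimal conversion sketch (the class map is hypothetical, not the patent's vehicle taxonomy):

```python
def to_yolo_label(class_id, box, img_w, img_h):
    """Convert a pixel bounding box (x_min, y_min, x_max, y_max) into a
    YOLO-format label line: class x_center y_center width height,
    all four coordinates normalised by the image dimensions."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# class 2 = "three-axle truck" (hypothetical class map), box in a 1920x1080 frame
line = to_yolo_label(2, (480, 270, 1440, 810), 1920, 1080)
```

One such file per video frame, sharing the image's filename, is the usual training-set layout for YOLO-family detectors.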
6. The heavy vehicle load identification method of claim 5, wherein: in step S210, the monitoring instrument is a high-definition camera mounted at a suitable height and angle on an auxiliary structure of the bridge near the target section of the bridge to be monitored.
7. The heavy vehicle load identification method according to claim 1, wherein said step S300 comprises the steps of:
S310, uploading the packaged model programs and trained model parameters of the BiLSTM vehicle weight recognition model and the YOLO target detection model to a server;
S320, when the vehicle to be identified travels through the target section, the deflection measuring instrument transmits the acquired real-time vehicle actuation deflection data to a database on the server, and the server inputs the vehicle actuation deflection data received in real time into the BiLSTM vehicle weight recognition model program for calculation, obtaining the vehicle weight result of the vehicle to be identified;
S330, when the vehicle to be identified travels within the identification range of the monitoring instrument, video image data are collected and input into the YOLO target detection model to identify the type of the vehicle to be identified;
S340, fusing the output results of the BiLSTM vehicle weight recognition model and the YOLO target detection model and feeding them back simultaneously to the video monitoring picture.
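The pairing in step S340 can be illustrated as nearest-in-time matching between the two result streams; the tolerance, data layout and field names below are assumptions for the sketch, not the patent's implementation:

```python
def fuse_results(weight_results, type_results, max_dt=2.0):
    """Pair each weight result with the nearest-in-time vehicle-type result.
    Each result is a (timestamp, value) tuple; max_dt is the largest
    timestamp gap (s) still treated as the same vehicle. Unmatched
    entries are dropped."""
    fused, used = [], set()
    for tw, weight in weight_results:
        best = None
        for i, (tv, vtype) in enumerate(type_results):
            if i in used:
                continue
            dt = abs(tw - tv)
            if dt <= max_dt and (best is None or dt < best[0]):
                best = (dt, i, vtype)
        if best is not None:
            used.add(best[1])
            fused.append({"time": tw, "weight_t": weight, "type": best[2]})
    return fused

weights = [(100.2, 42.5), (130.8, 18.0)]                 # (t, tonnes)
types = [(100.5, "six-axle truck"), (131.0, "three-axle truck"), (200.0, "bus")]
fused = fuse_results(weights, types)
```

Each fused record can then be overlaid on the video monitoring picture next to the detected vehicle.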
8. The heavy vehicle load identification method of claim 7, wherein: in step S320, the server extracts the vehicle actuation deflection data received in real time according to a preset time length before inputting the extracted data into the BiLSTM vehicle weight recognition model program.
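Extraction by a preset time length, as in claim 8, amounts to feeding the model fixed-length slices of the incoming stream. A minimal rolling-buffer sketch (buffer length and sample values are illustrative):

```python
from collections import deque

class DeflectionWindow:
    """Rolling buffer that yields fixed-length slices of the real-time
    deflection stream (length = preset time length in samples)."""
    def __init__(self, length):
        self.length = length
        self.buf = deque(maxlen=length)

    def push(self, sample):
        """Append one sample; return a full window once available, else None."""
        self.buf.append(sample)
        if len(self.buf) == self.length:
            return list(self.buf)
        return None

win = DeflectionWindow(4)
outputs = [win.push(x) for x in [0.1, 0.4, 0.9, 0.6, 0.2]]
```

Every non-None window would be passed to the weight recognition model program as one inference input.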
9. A heavy vehicle load identification method according to any one of claims 1 to 8, characterized in that: the vehicle actuation deflection data include at least a deflection time history and the corresponding times.
CN202310710655.7A 2023-06-14 2023-06-14 Heavy vehicle load identification method Active CN117746346B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310710655.7A CN117746346B (en) 2023-06-14 2023-06-14 Heavy vehicle load identification method


Publications (2)

Publication Number Publication Date
CN117746346A CN117746346A (en) 2024-03-22
CN117746346B true CN117746346B (en) 2024-06-28

Family

ID=90251404


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111442822A (en) * 2020-05-08 2020-07-24 上海数久信息科技有限公司 Method and device for detecting load of bridge passing vehicle
CN112179467A (en) * 2020-11-27 2021-01-05 湖南大学 Bridge dynamic weighing method and system based on video measurement of dynamic deflection

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114548375B (en) * 2022-02-23 2024-02-13 合肥工业大学 Cable-stayed bridge girder dynamic deflection monitoring method based on two-way long-short-term memory neural network




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 21-1, Building 028, Building 5, No. 15, Lane 587, Juxian Road, High tech Zone, Yinzhou District, Ningbo City, Zhejiang Province, 315000

Applicant after: Ningbo Langda Technology Co.,Ltd.

Address before: 21-1, Building 028, Building 5, No. 15, Lane 587, Juxian Road, High tech Zone, Yinzhou District, Ningbo City, Zhejiang Province, 315000

Applicant before: Ningbo Landa Engineering Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant