CN113850752A - Vehicle overload detection method, device and storage medium - Google Patents

Vehicle overload detection method, device and storage medium Download PDF

Info

Publication number
CN113850752A
CN113850752A (application CN202110694113.6A)
Authority
CN
China
Prior art keywords
vehicle
information
image
target
target vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110694113.6A
Other languages
Chinese (zh)
Inventor
郝行猛
舒梅
杨文韬
牛中彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110694113.6A priority Critical patent/CN113850752A/en
Publication of CN113850752A publication Critical patent/CN113850752A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vehicle overload detection method, device and storage medium. The vehicle overload detection method comprises the following steps: acquiring a target image containing a target vehicle, and identifying the target image to obtain vehicle information of the target vehicle; acquiring weight limit information of the target vehicle according to the vehicle information; and determining load information of the target vehicle, and comparing the load information with the weight limit information to obtain an overload detection result of the target vehicle. In this vehicle overload detection method, the vehicle information of the target vehicle is obtained by identifying an image sequence, the weight limit information is obtained according to the vehicle information, and the weight limit information is compared with the obtained actual load information of the target vehicle to yield the overload detection result. Overload judgment of the target vehicle is thus automated, with high efficiency and a low error rate, effectively assisting the construction of a safe traffic environment.

Description

Vehicle overload detection method, device and storage medium
Technical Field
The application belongs to the technical field of image recognition, and particularly relates to a vehicle overload detection method, a device and a storage medium.
Background
Against the background of "smart city" and "safe city" initiatives, the logistics and transportation industry is inseparable from the safety of people's lives, and the mature application of a simple and effective method for automatically judging cargo vehicle overload can greatly assist the construction of today's smart, safe cities. In most current logistics and checkpoint scenarios, overload is still judged by manually reading weighbridge (wagon balance) data, which is inefficient and has a high error rate. Therefore, in the current era of artificial intelligence, a stable and reliable method for snapshotting cargo vehicles and automatically judging overload is particularly important for the construction of a safe traffic environment.
Disclosure of Invention
The application provides a vehicle overload detection method, a device and a storage medium, which aim to solve the technical problem of low efficiency in vehicle overload judgment.
In order to solve the technical problem, the application adopts a technical scheme that: a vehicle overload detection method, comprising: acquiring a target image containing a target vehicle, and identifying the target image to acquire vehicle information of the target vehicle; acquiring weight limit information of the target vehicle according to the vehicle information; and determining the load information of the target vehicle, and comparing the load information with the weight limit information to obtain an overload detection result of the target vehicle.
According to an embodiment of the present application, the acquiring a target image including a target vehicle, and recognizing the target image to obtain vehicle information of the target vehicle includes: acquiring a head image, a body image and a tail image of the target vehicle; recognizing the vehicle head image and acquiring license plate information; identifying the vehicle body image and acquiring vehicle axle number information; and recognizing the vehicle tail image, and acquiring the license plate information and trailer license plate information.
According to an embodiment of the present application, the acquiring the head image, the body image, and the tail image of the target vehicle includes: capturing the vehicle head image, wherein the vehicle head image is captured by a main camera arranged in a driving-in stage of the running of the target vehicle; capturing the vehicle body image, wherein the vehicle body image is captured by a side auxiliary camera arranged at a transverse stage of the running of the target vehicle; capturing the vehicle tail image, wherein the vehicle tail image is captured by a secondary camera arranged in an exit stage of the running of the target vehicle; wherein, the main camera, the auxiliary camera and the side auxiliary camera are arranged in a linkage manner.
According to an embodiment of the application, the capturing of the vehicle head image, the vehicle tail image or the vehicle body image respectively comprises: acquiring an image sequence comprising successive frame images acquired by the corresponding camera; sequentially identifying the successive multi-frame images, and taking the first image whose confidence is higher than a preset value as an initial frame image; identifying, starting from the initial frame image, whether each subsequent frame image in the image sequence has the corresponding preset target; and, in response to the predetermined class of target being present in more than a predetermined proportion of a predetermined number of successive frame images, capturing the target vehicle by using the corresponding preset snapshot line.
According to an embodiment of the application, the preset snapshot lines comprise a drive-in snapshot line, a drive-out snapshot line and a transverse snapshot line, and capturing the target vehicle by using the corresponding preset snapshot line comprises: in response to the lower boundary of the driving-in vehicle head detection frame touching the drive-in snapshot line, starting a snapshot operation to acquire the vehicle head image; or, in response to the lower boundary of the driving-out vehicle tail detection frame touching the drive-out snapshot line, starting a snapshot operation to acquire the vehicle tail image; or, in response to the distance between the abscissa of the center line of the transverse vehicle body detection frame and the transverse snapshot line being less than a preset threshold, starting a snapshot operation to acquire the vehicle body image.
According to an embodiment of the present application, the recognizing the vehicle body image and acquiring the vehicle axle number information includes: obtaining a wheel detection frame of a wheel target in the vehicle body image; in response to the number of the wheel detection frames being more than three, removing wheel detection frames having a height not within a predetermined height range and a width not within a predetermined width range; determining the number of remaining wheel detection frames as the vehicle axle number information.
According to an embodiment of the present application, the recognizing the vehicle body image and acquiring the vehicle axle number information includes: determining the average width of the wheel detection frames and the average height of the center points of the wheel detection frames; in response to the distance between the center points of two adjacent wheel detection frames being smaller than a predetermined center point distance, removing the wheel detection frame with the lower confidence of the two adjacent wheel detection frames; in response to the center point coordinate value of a wheel detection frame being smaller than a predetermined center point coordinate value, removing the wheel detection frame whose center point coordinate value is smaller than the predetermined center point coordinate value; and determining the number of remaining wheel detection frames as the vehicle axle number information.
According to an embodiment of the present application, the acquiring weight limit information of the target vehicle according to the vehicle information includes: inquiring a prefabricated vehicle weight limit information coding table by using the vehicle information, wherein the vehicle information comprises license plate information, vehicle axle number information and trailer license plate information; and obtaining the weight limit information of the target vehicle.
According to an embodiment of the present application, the comparing the load information with the weight limit information to obtain the overload detection result of the target vehicle includes: in response to the load information being above the weight limit information, using vehicle state coding information consisting of the license plate information, the vehicle axle number information, the trailer license plate information, an overload flag bit and the load information as the overload detection result.
In order to solve the above technical problem, the present application adopts another technical solution: an electronic device comprises a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the above method.
In order to solve the above technical problem, the present application adopts another technical solution: a computer-readable storage medium, on which program data are stored, which program data, when being executed by a processor, carry out the above-mentioned method.
The vehicle overload detection method has at least the following advantages: the vehicle information of the target vehicle is obtained by identifying the image sequence, the weight limit information is obtained according to the vehicle information of the target vehicle, and the weight limit information is compared with the obtained actual load information of the target vehicle to yield the overload detection result of the target vehicle. Overload judgment of the target vehicle is thus automated, with high efficiency and a low error rate, effectively assisting the construction of a safe traffic environment.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art without inventive efforts, wherein:
FIG. 1 is a schematic flow diagram of one embodiment of vehicle overload detection of the present application;
FIG. 2 is a schematic view of a camera arrangement in one embodiment of vehicle overload detection of the present application;
FIG. 3 is a schematic diagram of a snapshot rule line arrangement in one embodiment of vehicle overload detection according to the present application;
FIG. 4 is a schematic illustration of vehicle status encoded information in one embodiment of vehicle overload detection of the present application;
FIG. 5 is a block diagram of an embodiment of the vehicle overload detection device of the present application;
FIG. 6 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 7 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1 to 4, fig. 1 is a schematic flow chart illustrating an embodiment of vehicle overload detection according to the present application; FIG. 2 is a schematic view of a camera arrangement in one embodiment of vehicle overload detection of the present application; FIG. 3 is a schematic diagram of a snapshot rule line arrangement in one embodiment of vehicle overload detection according to the present application; FIG. 4 is a schematic diagram of vehicle status encoded information in an embodiment of vehicle overload detection according to the present application.
An embodiment of the application provides a vehicle overload detection method, which comprises the following steps:
S11: Acquiring a target image containing the target vehicle, and identifying the target image to obtain the vehicle information of the target vehicle.
In one embodiment, obtaining the target image containing the target vehicle includes obtaining a head image, a body image and a tail image of the target vehicle.
Within the camera field of view, a complete record of the vehicle driving state comprises three stages: driving-in, transverse and driving-out. In this embodiment, the target images are acquired by a main camera, an auxiliary camera and a side auxiliary camera arranged in a linkage manner, as shown in fig. 2: the main camera covers the driving-in stage of the target vehicle and is used to capture and recognize the head image; the auxiliary camera covers the driving-out stage and is used to capture and recognize the tail image; and the side auxiliary camera covers the transverse stage and is used to capture and recognize the body image.
Firstly, the preset parameters and the rule lines related in the method are introduced:
1. setting parameters:
the preset parameters mainly include the confidence threshold Conf_thresh of the vehicle detection frame BBox, the proportional coefficients ratio_in, ratio_out and ratio_cross of the drive-in, drive-out and transverse snapshot lines, and the category attribute queue length N. The current frame whose target confidence is greater than Conf_thresh is set as the initial frame for displacement statistics, to ensure the stability of the initial state of the target vehicle.
2. Setting a rule line:
the preset rule lines mainly comprise the drive-in and drive-out snapshot lines [snap_line_in, snap_line_out] and the transverse snapshot line snap_line_cross. Assuming the image resolution is w × h, the snapshot line initialization positions are set as follows:
snap_line_in=h*ratio_in
snap_line_out=h*ratio_out
snap_line_cross=w*ratio_cross
as shown in fig. 3, the main camera and the sub-camera share a set of capturing mechanism, that is, a longitudinal capturing rule, and if the id value of Obj _ Class is 0, the main camera and the sub-camera can drive into a capturing line snap _ line _ in; if the id value of Obj _ Class is 1, then the snap line snap _ line _ out is enabled. Wherein, ratio _ in and ratio _ out are proportionality coefficients of the driving-in and driving-out snapping lines relative to the image height h respectively.
As shown in fig. 3, the side auxiliary camera adopts the transverse capture rule: if the id value of Obj_Class is 2, the transverse snapshot line snap_line_cross is enabled. ratio_cross is the proportional coefficient of the transverse snapshot line relative to the image width w.
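As an informal illustration of the rule-line setup above, the following Python sketch initializes the three snapshot lines and selects the line to enable for a given Obj_Class id; the ratio values are placeholder assumptions rather than values from this application.

# Hypothetical sketch of the snapshot-line initialization described above.
# The ratio values are illustrative placeholders, not values from this application.
def init_snap_lines(w, h, ratio_in=0.7, ratio_out=0.7, ratio_cross=0.5):
    return {
        "snap_line_in": h * ratio_in,        # drive-in line, scaled from image height
        "snap_line_out": h * ratio_out,      # drive-out line, scaled from image height
        "snap_line_cross": w * ratio_cross,  # transverse line, scaled from image width
    }

def enabled_snap_line(obj_class_id, lines):
    # Obj_Class ids: 0 = vehicle head, 1 = vehicle tail, 2 = vehicle body.
    key = {0: "snap_line_in", 1: "snap_line_out", 2: "snap_line_cross"}[obj_class_id]
    return lines[key]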
The method for capturing the images of the vehicle head, the vehicle body and the vehicle tail is described as follows:
First, capturing the vehicle head image includes:
a sequence of images is acquired, comprising successive frames of images acquired by a corresponding camera, i.e. the main camera.
The successive frames in the image sequence are identified in turn. Training data, together with the constructed convolutional neural network cfg configuration file, are used to train a YOLO deep-learning target detection model and obtain the weights of the target detection network. The detection model detects and localizes the target vehicle and the wheel targets in the image sequence and outputs a detection frame BBox and the class attribute of each BBox (in this scheme, the class attribute of the cargo vehicle includes three classes, vehicle head, vehicle tail and vehicle body, whose Obj_Class id values are 0, 1 and 2 respectively).
The first image in which the confidence of the vehicle head detection frame BBox is higher than the preset value Conf_thresh is taken as the initial frame image, to ensure the stability of the initial state of the target vehicle. If the confidence of the head detection frame BBox is below Conf_thresh, the vehicle head may still be far away and the captured head image small, which is unfavorable for recognition.
Starting from the initial frame image, each subsequent frame in the image sequence is checked for the corresponding predetermined target, namely a vehicle head target. If a head target is present, 1 is written into the head queue for that frame; if not, 0 is written. The total length of the head queue is N.
In response to the predetermined class of target, namely the vehicle head target, being present in more than a predetermined proportion of a predetermined number of successive frames: for example, if the sum of the elements in the head queue exceeds N/2, the Obj_Class id of the category is assigned accordingly, and the head target is assigned 0. Specifically, N may be 15 frames; if frame skipping is used when acquiring the image sequence, the 15 frames cover about 2 s.
The target vehicle is then captured by using the corresponding preset snapshot line, namely the drive-in snapshot line snap_line_in. Specifically, the corresponding snapshot line may be enabled according to the Obj_Class id value of the target detected in the current camera view.
In response to the lower boundary of the driving-in vehicle head detection frame touching the drive-in snapshot line, the snapshot operation is started to acquire the vehicle head image.
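A minimal sketch of the head-capture flow just described: the queue length N follows the text, while the detection dictionary layout and the direction of motion are assumptions for illustration.

# Sketch of the N-frame head queue, majority vote and drive-in line trigger.
from collections import deque

N = 15                               # category attribute queue length
head_queue = deque([0] * N, maxlen=N)

def head_snapshot_due(det, snap_line_in):
    """det: None, or a dict {'class_id': int, 'conf': float, 'bbox': (x_ul, y_ul, x_lr, y_lr)}."""
    head_queue.append(1 if det is not None and det["class_id"] == 0 else 0)
    if sum(head_queue) <= N / 2 or det is None:
        return False                 # head target not yet stable across recent frames
    y_lower = det["bbox"][3]         # lower boundary of the head detection frame
    # Assuming the vehicle moves toward the bottom of the frame as it approaches.
    return y_lower >= snap_line_in   # boundary has reached the drive-in snapshot line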
Second, capturing the vehicle tail image includes:
a sequence of images is acquired comprising successive frames of images acquired by a corresponding camera, i.e. a secondary camera.
The successive frames in the image sequence are identified in turn, and the first image in which the confidence of the vehicle tail detection frame BBox is higher than the preset value Conf_thresh is taken as the initial frame image, to ensure the stability of the initial state of the target vehicle. If the confidence of the tail detection frame BBox is below Conf_thresh, the vehicle tail may be too close and the captured tail image too large, which is unfavorable for recognition.
Starting from the initial frame image, each subsequent frame in the image sequence is checked for the corresponding predetermined target, namely a vehicle tail target. If a tail target is present, 1 is written into the tail queue for that frame; if not, 0 is written. The total length of the tail queue is N.
In response to the predetermined class of target, namely the vehicle tail target, being present in more than a predetermined proportion of a predetermined number of successive frames: for example, if the sum of the elements in the tail queue exceeds N/2, the Obj_Class id of the category is assigned accordingly, and the tail target is assigned 1. Specifically, N may be 15 frames; if frame skipping is used when acquiring the image sequence, the 15 frames cover about 2 s.
The target vehicle is then captured by using the corresponding preset snapshot line, namely the drive-out snapshot line snap_line_out. Specifically, the corresponding snapshot line may be enabled according to the Obj_Class id value of the target detected in the current camera view.
In response to the lower boundary of the driving-out vehicle tail detection frame touching the drive-out snapshot line, the snapshot operation is started to acquire the vehicle tail image.
Third, capturing the vehicle body image includes:
a sequence of images is acquired comprising successive frames of images acquired by the corresponding camera, i.e. the side secondary camera.
The successive frames in the image sequence are identified in turn, and the first image in which the confidence of the vehicle body detection frame BBox is higher than the preset value Conf_thresh is taken as the initial frame image, to ensure the stability of the initial state of the target vehicle. If the confidence of the body detection frame BBox is not higher than Conf_thresh, the vehicle body may be too close and the captured body image too large, which is unfavorable for recognition.
Starting from the initial frame image, each subsequent frame in the image sequence is checked for the corresponding predetermined target, namely a vehicle body target. If a body target is present, 1 is written into the body queue for that frame; if not, 0 is written. The total length of the body queue is N.
In response to the predetermined class of target, namely the vehicle body target, being present in more than a predetermined proportion of a predetermined number of successive frames: for example, if the sum of the elements in the body queue exceeds N/2, the Obj_Class id of the category is assigned accordingly, and the body target is assigned 2. Specifically, N may be 15 frames; if frame skipping is used when acquiring the image sequence, the 15 frames cover about 2 s.
The target vehicle is then captured by using the corresponding preset snapshot line, namely the transverse snapshot line snap_line_cross. Specifically, the corresponding snapshot line may be enabled according to the Obj_Class id value of the target detected in the current camera view.
In response to the distance between the abscissa X of the center line of the transverse vehicle body detection frame and the transverse snapshot line X_snap_cross being less than a preset threshold delta_s, the snapshot operation is started to acquire the vehicle body image. Keeping a certain margin between the abscissa of the body detection frame and the transverse snapshot line allows flexible snapshot timing, so that the captured target lies closer to the center of the field of view.
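For the transverse case, the trigger condition reduces to a distance check between the body-frame center line and the transverse snapshot line; a small sketch, with delta_s as an assumed tuning threshold:

# Sketch of the transverse snapshot condition described above.
def body_snapshot_due(bbox, snap_line_cross, delta_s):
    x_ul, y_ul, x_lr, y_lr = bbox
    x_center = (x_ul + x_lr) / 2.0                   # abscissa of the body-frame center line
    return abs(x_center - snap_line_cross) < delta_s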
In this method, deep-learning target detection on the image sequence with a target confidence check guarantees the reliability of the detected targets, and buffering the detected target categories over multiple frames in the Obj_Class queue effectively improves the accuracy of the output target category. In addition, the multi-camera linkage and the combined longitudinal and transverse snapshot mode capture the feature information of the target vehicle from multiple directions; by checking the class attribute Obj_Class of the target BBox (0 for the vehicle head, 1 for the vehicle tail, 2 for the vehicle body), the drive-in, drive-out and transverse snapshot lines of the cameras are enabled and configured automatically.
In one embodiment, identifying the target image to obtain the vehicle information of the target vehicle includes:
recognizing the vehicle head image to acquire the license plate information of the vehicle head, where the license plate information comprises the license plate characters; recognizing the vehicle body image to acquire the vehicle axle number information from the side of the vehicle body; and recognizing the vehicle tail image to acquire the license plate information and trailer plate information of the vehicle tail, where the license plate information comprises character information such as the character length and the plate number, and the trailer plate information indicates whether the vehicle is a trailer, usually marked by the character '挂' (trailer) placed after the plate number.
Recognizing the vehicle body image to acquire the vehicle axle number information includes:
in this embodiment, the number of detected wheel targets in the vehicle body image is taken as the number of axles of the target vehicle. In order to ensure the accuracy of the axle number identification result, a post-processing strategy is adopted to filter invalid targets in the detection result of the wheel target. Assuming that the number of wheel targets to be detected is N, the information of each wheel detection frame BBox is represented by the coordinates (X) at the upper left cornerul,Yul) And coordinates of lower right corner (X)lr,Ylr) And (4) forming.
First, in response to the number of wheel detection frames being three or more, the wheel detection frames whose height is not within a predetermined height range and whose width is not within a predetermined width range are removed, and the number of remaining wheel detection frames is used as the vehicle axle number information. In this way, detection frames that are obviously too large or too small to be wheels are removed, which reduces errors and improves the accuracy of the axle-count information.
Second, the average width of the wheel detection frames and the average height of their center points are calculated. The average width w_avg of the wheel detection frames and the average height h_avg of the Y coordinates of the detection frame center points are given by:
w_avg = (1/N) * Σ_i (X_lr,i - X_ul,i)
h_avg = (1/N) * Σ_i ((Y_ul,i + Y_lr,i) / 2)
In response to the distance between the center points of two adjacent wheel detection frames being smaller than the predetermined center-point distance k * w_avg (0 < k < 1), the wheel detection frame with the lower confidence of the two is removed. In this way, abnormal wheel detections that lie too close to a normal wheel, such as wheel ghosting caused by excessive speed, are removed, which reduces errors and improves the accuracy of the axle-count information.
In response to the center point coordinate value of a wheel detection frame being smaller than the predetermined center point coordinate value r * h_avg (0 < r < 1), that wheel detection frame is removed. In this way, abnormal wheel detections that lie too far from the average center point of the detection frames, such as wheel patterns painted on the carriage, are removed, which reduces errors and improves the accuracy of the axle-count information.
The number of remaining wheel detection frames is determined as the vehicle axle number information.
By adopting at least one of the above methods, errors can be reduced and the accuracy of the axle-count information improved. In other embodiments, the two methods may also be combined with other methods to reduce errors as far as possible, so that the detected vehicle axle number information matches the actual number of axles of the target vehicle.
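A sketch of the axle-count post-processing described above, combining the size filter, the near-duplicate removal and the center-height filter; the concrete size ranges and the coefficients k and r are assumptions for illustration.

# Illustrative post-processing of wheel detection frames; thresholds are placeholders.
def count_axles(boxes, confs, h_range=(20, 200), w_range=(20, 200), k=0.5, r=0.5):
    """boxes: list of (x_ul, y_ul, x_lr, y_lr); confs: matching confidence values."""
    # 1) Drop frames whose height or width falls outside the expected wheel size range.
    kept = [(b, c) for b, c in zip(boxes, confs)
            if h_range[0] <= (b[3] - b[1]) <= h_range[1]
            and w_range[0] <= (b[2] - b[0]) <= w_range[1]]
    if not kept:
        return 0
    w_avg = sum(b[2] - b[0] for b, _ in kept) / len(kept)
    h_avg = sum((b[1] + b[3]) / 2.0 for b, _ in kept) / len(kept)
    # 2) Of two adjacent frames whose centers are closer than k * w_avg, keep the more confident.
    kept.sort(key=lambda bc: (bc[0][0] + bc[0][2]) / 2.0)
    deduped = []
    for b, c in kept:
        cx = (b[0] + b[2]) / 2.0
        if deduped and abs(cx - (deduped[-1][0][0] + deduped[-1][0][2]) / 2.0) < k * w_avg:
            if c > deduped[-1][1]:
                deduped[-1] = (b, c)      # keep the more confident of the close pair
        else:
            deduped.append((b, c))
    # 3) Drop frames whose center height is below r * h_avg (e.g. wheel patterns on the carriage).
    remaining = [b for b, _ in deduped if (b[1] + b[3]) / 2.0 >= r * h_avg]
    return len(remaining)                 # remaining frames give the axle count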
For wheel target detection, a deep-learning target detection model outputs the wheel detection results in the body image captured by the side auxiliary camera; invalid detection targets are then filtered out by the post-processing strategy, improving the accuracy of the vehicle axle-count result.
Because both the vehicle body detection frame and the wheel detection frame take the upper-left corner as the coordinate origin, the wheel detection frame can be converted between the coordinate system of the vehicle body detection frame and that of the original image as follows:
Suppose the upper-left and lower-right coordinates of the body detection frame BBox from the side auxiliary camera are OD_vehicle(X_ul, Y_ul) and OD_vehicle(X_lr, Y_lr), and the upper-left and lower-right coordinates of a wheel detection frame, relative to the body detection frame, are OD_wheel(X_1, Y_1) and OD_wheel(X_2, Y_2). The upper-left coordinate OD_wheel(X_1', Y_1') and lower-right coordinate OD_wheel(X_2', Y_2') of the wheel detection frame on the original image are then mapped as:
X_1' = X_ul + X_1, Y_1' = Y_ul + Y_1
X_2' = X_ul + X_2, Y_2' = Y_ul + Y_2
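A short sketch of this coordinate mapping, under the assumption made above that the wheel frame coordinates are given relative to the body detection frame and are shifted by the body frame's upper-left corner to obtain original-image coordinates:

# Sketch of the wheel-to-original-image coordinate mapping reconstructed above.
def wheel_to_image_coords(body_box, wheel_box):
    # body_box: (X_ul, Y_ul, X_lr, Y_lr) in original-image coordinates.
    # wheel_box: (X1, Y1, X2, Y2) relative to the body detection frame.
    x_ul, y_ul = body_box[0], body_box[1]
    x1, y1, x2, y2 = wheel_box
    return (x_ul + x1, y_ul + y1, x_ul + x2, y_ul + y2)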
S12: Acquiring the weight limit information of the target vehicle according to the vehicle information.
In one embodiment, the acquiring the weight limit information of the target vehicle according to the vehicle information includes:
and inquiring a prefabricated vehicle weight limit information coding table by utilizing vehicle information, wherein the vehicle information comprises license plate information, the number of vehicle axles and trailer license plate information. And obtaining the weight limit information of the target vehicle according to the vehicle weight limit information coding table.
The vehicle information mainly comprises the license plate information (such as the recognized license plate characters), the vehicle axle number information and the trailer plate information; the axle number and trailer plate information of the target vehicle are used as the input when the load-judgment exclusive-OR (XOR) logic module queries the weight limit of the vehicle. The vehicle weight limit information coding table, established according to the number of vehicle axles and the trailer plate information, is as follows:
TABLE 1 encoding table for vehicle weight limit information
S13: and acquiring load information of the target vehicle, and comparing the load information with the weight limit information to acquire an overload detection result of the target vehicle.
The load information of the target vehicle is obtained from a weighbridge (wagon balance) on the ground. In logistics and checkpoint scenarios equipped with a weighbridge, a linked camera group consisting of the main camera, the auxiliary camera and the side auxiliary camera is usually installed; the camera group acquires the target images and the weighbridge provides the load information, so that it can be judged whether the cargo vehicle is overloaded or whether it may pass the checkpoint.
The weight limit information W_Limit of the target vehicle is acquired through an exclusive-OR (XOR) logic query between the coding information Vehicle_info, formed from the axle number information and trailer plate information of the target vehicle, and the weight limit list List_info. The vehicle overload flag bit W_flag is then obtained from the comparison between the measured load W_True and W_Limit, and whether the vehicle is overloaded is judged accordingly.
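The following Python sketch shows one way such an XOR-style table query and overload flag could be implemented; the key encoding and the weight limits in LIST_INFO are illustrative placeholders, not the content of Table 1.

# Illustrative weight-limit query and overload flag; limits below are placeholders.
def encode_vehicle_info(axle_count, is_trailer):
    return (axle_count << 1) | (1 if is_trailer else 0)

# Hypothetical List_info: encoded vehicle info -> weight limit in tonnes.
LIST_INFO = {
    encode_vehicle_info(2, False): 18.0,
    encode_vehicle_info(3, False): 25.0,
    encode_vehicle_info(4, False): 31.0,
    encode_vehicle_info(6, True): 49.0,
}

def query_weight_limit(axle_count, is_trailer):
    vehicle_info = encode_vehicle_info(axle_count, is_trailer)
    for key, w_limit in LIST_INFO.items():
        if (vehicle_info ^ key) == 0:   # XOR of matching codes is zero
            return w_limit
    return None                          # no matching entry in the table

def overload_flag(w_true, w_limit):
    return 1 if w_limit is not None and w_true > w_limit else 0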
Specifically, comparing the load information W_True with the weight limit information W_Limit to obtain the overload detection result of the target vehicle includes:
In response to the load information being above the weight limit information, the vehicle state coding information consisting of the license plate information, the vehicle axle number information, the trailer plate information, the overload flag bit and the load information is taken as the overload detection result and reported. For the trailer plate information state, 1 indicates that the vehicle is a trailer and 0 indicates that it is not; for the overload flag bit, 1 indicates that the current vehicle is overloaded and an early-warning signal is issued, and 0 indicates that the load is within the limit. The vehicle state coding information is shown in fig. 4.
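A companion sketch assembling the five-part vehicle state record described above (reusing overload_flag from the previous sketch); the field names are assumptions for readability, and fig. 4 defines the actual coded layout.

# Sketch of the reported vehicle state coding information (five parts).
def vehicle_state_record(plate, axle_count, is_trailer, w_true, w_limit):
    return {
        "plate": plate,                              # license plate characters
        "axles": axle_count,                         # vehicle axle number information
        "trailer": 1 if is_trailer else 0,           # 1 = trailer, 0 = not a trailer
        "overload": overload_flag(w_true, w_limit),  # 1 = overloaded (warn), 0 = within limit
        "load": w_true,                              # measured load from the weighbridge
    }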
In the present application, the axle number information and trailer plate information of the target vehicle are stored in binary coded form, and the weight limit information W_Limit of the target vehicle is obtained through an XOR logic query between the vehicle coding information Vehicle_info and the weight limit list List_info. The vehicle overload flag bit W_flag is obtained by comparing the load information W_True with the weight limit information W_Limit, so that whether the target vehicle is overloaded can be judged conveniently and effectively. The load logic judgment module outputs the final load state of the cargo vehicle and reports coded information consisting of five parts, namely the license plate characters, the axle number information, the trailer plate information, the overload flag bit and the load value, covering the state information of the target vehicle from multiple aspects.
Referring to fig. 5, fig. 5 is a schematic frame diagram of an embodiment of a vehicle overload detection device according to the present application.
The present application further provides a vehicle overload detection device 20, which includes an acquisition module 21 and a processing module 22. The acquisition module 21 acquires a target image containing a target vehicle, identifies the target image, and obtains the vehicle information of the target vehicle. The processing module 22 acquires the weight limit information of the target vehicle according to the vehicle information, determines the load information of the target vehicle, and compares the load information with the weight limit information to obtain the overload detection result of the target vehicle.
With the vehicle overload detection device 20 of the present application, the vehicle information of the target vehicle is obtained by identifying the image sequence, the weight limit information can be obtained according to the vehicle information of the target vehicle, and the weight limit information is compared with the obtained actual load information of the target vehicle to yield the overload detection result of the target vehicle. Overload judgment of the target vehicle is thus automated, with high efficiency and a low error rate, effectively assisting the construction of a safe traffic environment.
Referring to fig. 6, fig. 6 is a schematic diagram of a frame of an embodiment of an electronic device according to the present application.
The present application further provides an electronic device 30, which includes a memory 31 and a processor 32 coupled to each other, wherein the processor 32 is configured to execute program instructions stored in the memory 31 to implement the vehicle overload detection method of any of the above embodiments. In one particular implementation scenario, the electronic device 30 may include, but is not limited to: a microcomputer, a server, and the electronic device 30 may also include a mobile device such as a notebook computer, a tablet computer, and the like, which is not limited herein.
Specifically, the processor 32 is configured to control itself and the memory 31 to implement the steps of the vehicle overload detection method of any one of the above embodiments. The processor 32 may also be referred to as a CPU (Central Processing Unit). The processor 32 may be an integrated circuit chip having signal processing capabilities. The processor 32 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 32 may be commonly implemented by an integrated circuit chip.
Referring to fig. 7, fig. 7 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application.
Yet another embodiment of the present application provides a computer-readable storage medium 40, on which program data 41 is stored, the program data 41 implementing the vehicle overload detection method of any of the above embodiments when executed by a processor.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium 40. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a readable storage medium 40 and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method of the embodiments of the present application. And the aforementioned readable storage medium 40 includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (11)

1. A vehicle overload detection method, comprising:
acquiring a target image containing a target vehicle, and identifying the target image to acquire vehicle information of the target vehicle;
acquiring weight limit information of the target vehicle according to the vehicle information;
and determining the load information of the target vehicle, and comparing the load information with the weight limit information to obtain an overload detection result of the target vehicle.
2. The method of claim 1, wherein the acquiring a target image containing a target vehicle, and identifying the target image to obtain vehicle information of the target vehicle, comprises:
acquiring a head image, a body image and a tail image of the target vehicle;
recognizing the vehicle head image and acquiring license plate information;
identifying the vehicle body image and acquiring vehicle axle number information;
and recognizing the vehicle tail image, and acquiring the license plate information and trailer license plate information.
3. The method of claim 2, wherein the acquiring the head image, the body image, and the tail image of the target vehicle comprises:
capturing the vehicle head image, wherein the vehicle head image is captured by a main camera arranged in a driving-in stage of the running of the target vehicle;
capturing the vehicle body image, wherein the vehicle body image is captured by a side auxiliary camera arranged at a transverse stage of the running of the target vehicle;
capturing the vehicle tail image, wherein the vehicle tail image is captured by a secondary camera arranged in an exit stage of the running of the target vehicle;
wherein, the main camera, the auxiliary camera and the side auxiliary camera are arranged in a linkage manner.
4. The method of claim 2, wherein the capturing the head image, capturing the tail image, or capturing the body image respectively comprises:
acquiring a sequence of images comprising successive frames of images acquired by corresponding cameras;
sequentially identifying the successive multi-frame images, and taking the first image whose confidence is higher than a preset value as an initial frame image;
identifying whether each subsequent frame of image in the image sequence has a corresponding preset target from the initial frame of image;
in response to the predetermined class of target being present in more than a predetermined proportion of a predetermined number of successive frame images,
capturing the target vehicle by using the corresponding preset snapshot line.
5. The method according to claim 3, wherein preset snapshot lines comprise a drive-in snapshot line, a drive-out snapshot line and a transverse snapshot line, and capturing the target vehicle by using the corresponding preset snapshot line comprises:
in response to the lower boundary of the driving-in vehicle head detection frame touching the drive-in snapshot line,
starting a snapshot operation to obtain the vehicle head image; or,
in response to the lower boundary of the driving-out vehicle tail detection frame touching the drive-out snapshot line,
starting a snapshot operation to obtain the vehicle tail image; or,
in response to the distance between the abscissa of the center line of the transverse vehicle body detection frame and the transverse snapshot line being less than a preset threshold,
starting a snapshot operation to acquire the vehicle body image.
6. The method of claim 1, wherein the identifying the body image and obtaining vehicle axle count information comprises:
obtaining a wheel detection frame of a wheel target in the vehicle body image;
in response to the number of the wheel detection frames being more than three, removing wheel detection frames having a height not within a predetermined height range and a width not within a predetermined width range;
determining the number of remaining wheel detection frames as the vehicle axle number information.
7. The method of claim 1, wherein the identifying the body image and obtaining vehicle axle count information comprises:
determining the average width of the wheel detection frames and the average height of the center points of the wheel detection frames;
in response to the distance between the center points of two adjacent wheel detection frames being smaller than a predetermined center point distance,
removing the wheel detection frame with the lower confidence of the two adjacent wheel detection frames;
in response to the center point coordinate value of a wheel detection frame being smaller than a predetermined center point coordinate value,
removing the wheel detection frame whose center point coordinate value is smaller than the predetermined center point coordinate value;
determining the number of remaining wheel detection frames as the vehicle axle number information.
8. The method of claim 1, wherein the obtaining weight limit information of the target vehicle according to the vehicle information comprises:
inquiring a prefabricated vehicle weight limit information coding table by using the vehicle information, wherein the vehicle information comprises license plate information, vehicle axle number information and trailer license plate information;
and obtaining the weight limit information of the target vehicle.
9. The method of claim 8, wherein comparing the load information with the weight limit information to obtain an overload detection result of the target vehicle comprises:
in response to the load information being above the weight limit information,
using vehicle state coding information consisting of the license plate information, the vehicle axle number information, the trailer license plate information, an overload flag bit and the load information as the overload detection result.
10. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the method of any of claims 1 to 9.
11. A computer-readable storage medium, on which program data are stored, which program data, when being executed by a processor, carry out the method of any one of claims 1 to 9.
CN202110694113.6A 2021-06-22 2021-06-22 Vehicle overload detection method, device and storage medium Pending CN113850752A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110694113.6A CN113850752A (en) 2021-06-22 2021-06-22 Vehicle overload detection method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110694113.6A CN113850752A (en) 2021-06-22 2021-06-22 Vehicle overload detection method, device and storage medium

Publications (1)

Publication Number Publication Date
CN113850752A true CN113850752A (en) 2021-12-28

Family

ID=78975132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110694113.6A Pending CN113850752A (en) 2021-06-22 2021-06-22 Vehicle overload detection method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113850752A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332827A (en) * 2022-03-10 2022-04-12 浙江大华技术股份有限公司 Vehicle identification method and device, electronic equipment and storage medium
CN114792408A (en) * 2022-06-21 2022-07-26 浙江大华技术股份有限公司 Motor vehicle snapshot method, motor vehicle snapshot device and computer storage medium

Legal Events

PB01  Publication
SE01  Entry into force of request for substantive examination