CN111950478B - Method for detecting S-shaped driving behavior of automobile in weighing area of dynamic flat-plate scale - Google Patents


Info

Publication number
CN111950478B
CN111950478B (application CN202010825107.5A)
Authority
CN
China
Prior art keywords: flat, edge, plate scale, scale, wheel
Prior art date
Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Application number
CN202010825107.5A
Other languages
Chinese (zh)
Other versions
CN111950478A (en)
Inventor
冯骥良
姜俊
祝顺飞
Current Assignee
Zhejiang Dong Ding Electronic Ltd By Share Ltd
Original Assignee
Zhejiang Dong Ding Electronic Ltd By Share Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dong Ding Electronic Ltd By Share Ltd
Priority to CN202010825107.5A
Publication of CN111950478A
Application granted
Publication of CN111950478B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for detecting S-shaped driving behavior of an automobile in the weighing area of a dynamic flat-plate scale. The method comprises the following steps: acquire an image of the flat-plate scale area with a camera on one side of the road and input it into a contour anomaly detection neural network for analysis; when the contour is normal, input the flat-plate scale area image into a flat-plate scale perception neural network to obtain a flat-plate scale semantic segmentation map; perform edge perception on the semantic segmentation map using the edge contour map and an edge perception neural network to obtain a flat-plate scale edge point distribution map; fit straight lines to the edge points and apply a perspective transformation to obtain a top view of the flat-plate scale edge lines; detect wheel touchdown points in the acquired automobile images; and transform the detected wheel touchdown point coordinates and the edge line top view into the same coordinate system, then judge whether the automobile exhibits S-shaped driving behavior from the positional relation between the wheel touchdown points and the flat-plate scale edge lines. With this method, the S-shaped driving behavior of a weighed automobile can be identified accurately.

Description

Method for detecting S-shaped driving behavior of automobile in weighing area of dynamic flat-plate scale
Technical Field
The invention relates to the field of artificial intelligence and automobile dynamic weighing, in particular to a method for detecting S-shaped driving behaviors of an automobile in a weighing area of a dynamic flat-plate scale.
Background
In automobile dynamic weighing, the vehicle drives directly over the flat-plate scale without stopping and its weight is obtained. Because of visual error, the driver cannot accurately observe the position of the automobile on the electronic scale; moreover, when weighing on a narrow-strip flat-plate scale, a driver may deliberately steer an S-shaped route so that one wheel of an axle is on the road surface while only the other wheel is on the flat-plate scale, thereby reducing the weight measured during weighing. Some existing methods detect the position of the automobile with infrared transmitters to judge whether S-shaped driving occurs, but infrared transmitters are easily affected by the environment, have low detection precision, and are difficult to maintain.
Disclosure of Invention
The invention provides a method for detecting S-shaped driving behavior of an automobile in the weighing area of a dynamic flat-plate scale, which is used to detect S-shaped driving cheating behavior during dynamic weighing.
A method for detecting S-shaped driving behaviors of an automobile in a weighing area of a dynamic flat-plate scale comprises the following steps:
step 1, paving a narrow-strip flat-plate scale which is level with the ground on a road, and arranging cameras on two sides of the road respectively;
step 2, carrying out edge extraction on the image of the flat-plate scale area collected by the camera on one side of the road to obtain an edge contour map;
step 3, inputting the edge profile into a profile anomaly detection neural network, extracting features through a profile anomaly detection encoder to obtain a first feature map, analyzing the first feature map by a classification module, outputting a detection result of whether the profile is abnormal, returning to the step 2 to continue profile detection if the profile is abnormal, and taking a flat-plate scale area image corresponding to the edge profile to transfer to the step 4 to analyze if the profile is normal;
step 4, inputting the image of the flat-plate scale area into a flat-plate scale perception neural network for analysis, extracting features through a flat-plate scale perception encoder, sampling and restoring the extracted features through a flat-plate scale perception decoder, and outputting a flat-plate scale semantic segmentation graph for distinguishing the semantics of the flat-plate scale, the road and other irrelevant elements;
step 5, inputting the semantic segmentation graph of the flat-plate scale into a flat-plate scale edge perception neural network, extracting features through an edge perception encoder to obtain a second feature graph, combining the second feature graph with the first feature graph to obtain a third feature graph, carrying out weighted classification on the third feature graph through a plurality of fully-connected networks, and splicing and integrating the outputs of the plurality of fully-connected networks to obtain a flat-plate scale edge point distribution graph;
step 6, performing linear fitting on edge points on the edge point distribution diagram of the flat-plate scale, and performing perspective transformation to obtain a top view of the edge line of the flat-plate scale;
step 7, collecting automobile images by using cameras on two sides of a road, and detecting wheel touchdown points in the collected automobile images;
and 8, transforming the detected wheel landing point coordinates and the flat-plate scale edge line top view into the same coordinate system, and judging whether the automobile has S-shaped driving behaviors or not according to the position relation between the wheel landing point and the flat-plate scale edge line.
Further, the performing straight line fitting on the edge points on the edge point distribution graph of the flat-plate scale comprises:
setting a distribution diagram of edge points of the flat-plate scale to comprise 2N edge points, wherein N is an integer, and executing the following steps:
step a, randomly take N edge points and fit a straight line L1;
step b, calculate the sum of the distances from the selected N edge points to the straight line L1; if the sum of the distances is less than a first threshold, judge L1 to be a valid straight line and go to step c, otherwise return to step a;
step c, fit a straight line L2 using the remaining N edge points, and calculate the sum of the distances from those N edge points to L2; if the sum of the distances is less than the first threshold, judge L2 to be a valid straight line; otherwise return to step a;
step d, calculate the slope difference between L1 and L2; if the difference is smaller than a second threshold, take the two obtained straight lines as the edge lines of the flat-plate scale, otherwise return to step a.
Further, the optical axis of the cameras on the two sides of the road is parallel to the edge line of the flat-plate scale.
Further, the method also includes training the flat-plate scale edge-aware neural network to:
constructing a training set of the semantic segmentation graph of the flat plate scale, and dividing the semantic segmentation graph of the flat plate scale into a plurality of semantic segmentation subgraphs along the road direction;
the edge perception encoder and each fully-connected network form a flat-plate scale edge perception branch, and the label data of each branch are confidence labels for the edge points of the two flat-plate scale edge lines perpendicular to the road direction on the semantic segmentation subgraph;
inputting the semantic segmentation map training set of the flat-plate scale and the label data into a flat-plate scale edge perception neural network, training the semantic segmentation map training set and the label data by adopting a cross entropy loss function, and outputting an edge point confidence map of a corresponding flat-plate scale edge line by each flat-plate scale edge perception branch.
Further, the width of the edge point of the edge line of the flat-plate scale is 1 pixel.
Further, the method also includes training the flat panel scale-aware neural network to:
constructing a weighing area, taking an image containing the flat-plate scale as a sample set, and respectively labeling roads, the flat-plate scale and other irrelevant elements on the sample set image;
inputting the image data and the labeled data of the sample set into a flat-plate scale perception neural network, carrying out feature extraction on the image by a flat-plate scale perception encoder, outputting a feature map, carrying out up-sampling reduction on the feature map by a flat-plate scale perception decoder, outputting a flat-plate scale semantic segmentation map, and training by adopting a cross entropy loss function.
Further, the detecting of the wheel touchdown point in the acquired automobile image is specifically:
detecting a wheel touchdown point based on a wheel key point detection neural network; wherein, wheel key point detects neural network includes:
the wheel key point detection encoder is used for encoding the image and extracting the characteristics to obtain a wheel landing point characteristic diagram;
and the wheel key point detection decoder is used for carrying out up-sampling reduction on the wheel landing point characteristic diagram to obtain a wheel landing point thermodynamic diagram.
Further, the method also includes training the wheel keypoint detection neural network to:
acquiring an automobile image to construct a sample set, carrying out wheel landing place labeling on the image in the sample set, and carrying out Gaussian kernel convolution processing on a labeled key point diagram to obtain wheel landing place thermodynamic diagram label data;
and training a wheel key point detection encoder and a wheel key point detection decoder end to end by adopting a mean square error loss function.
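The Gaussian-kernel label generation described above can be sketched as follows. This is a minimal NumPy sketch; the function name, map size, and the `sigma` value are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def touchdown_heatmap(h, w, points, sigma=2.0):
    """Render ground-truth heatmap labels for wheel touchdown points.

    Each annotated touchdown point (x, y) becomes a 2-D Gaussian peak;
    the resulting map is the label the decoder is trained against with
    the mean square error loss.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for (px, py) in points:
        g = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # overlapping peaks keep the max
    return heatmap
```

At inference time, the touchdown coordinates would be recovered from the predicted heatmap, e.g. by taking the peak position.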
Further, the method further comprises:
the vertical coordinates of two edge lines of the flat-plate scale under the same coordinate system are respectively y0、y1,y0<y1The coordinates of the landing points of the left wheel and the right wheel on the same bearing of the automobile are respectively yL、yrIf y is0<yL<y1And y is0<yr<y1When the automobile is weighed, the wheels are all positioned on the flat-plate scale, and normal weighing is judged;
if y0<yL<y1、yr>y1Or y is0<yL<y1、yr<y0Or y is0<yr<y1、yL>y1Or y is0<yr<y1、yL<y0The S-shaped cheating of the automobile is judged.
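The positional judgment above reduces to a few interval checks per axle. A minimal Python sketch follows; the function name is hypothetical, and the `"off-scale"` fallback for wheels matching none of the enumerated conditions is an added assumption, since the patent only enumerates the normal case and the four one-wheel-on cheating cases.

```python
def is_s_shaped(y0, y1, yl, yr):
    """Judge S-shaped cheating from one axle's wheel touchdown ordinates.

    y0 < y1 are the ordinates of the two flat-plate-scale edge lines in
    the shared coordinate system; yl and yr are the left/right wheel
    touchdown ordinates of the same axle. Returns "normal" when both
    wheels are on the scale, "s-shape" when exactly one wheel is on it
    (the patent's cheating conditions), and "off-scale" otherwise.
    """
    on_left = y0 < yl < y1
    on_right = y0 < yr < y1
    if on_left and on_right:
        return "normal"
    if on_left != on_right:  # exactly one wheel on the scale
        return "s-shape"
    return "off-scale"  # assumption: neither wheel on the scale
```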
The invention has the beneficial effects that:
1. the invention analyzes the image collected by the camera to obtain the relationship between the edge line position of the flat-plate scale and the wheel landing position, and judges whether the automobile has S-shaped behavior.
2. According to the invention, the contour anomaly detection neural network is designed to firstly perform contour analysis on the image acquired by the camera, and when the contour is normal, namely, no shielding factors such as automobiles and pedestrians exist, the subsequent flat-plate scale edge sensing is performed, so that on one hand, invalid edge sensing can be avoided, on the other hand, the image acquired by the camera does not need to be screened manually, and the intelligence degree and convenience of the method are improved.
3. The flat-plate scale edge line detection method is based on the deep learning technology to design the flat-plate scale perception neural network and the flat-plate scale edge perception neural network to detect the flat-plate scale edge line, on one hand, the flat-plate scale area does not need to be calibrated manually, and the flat-plate scale edge information can be obtained automatically under various working conditions; on the other hand, the flat-plate scale edge perception neural network adopts a plurality of branch structures to perceive the flat-plate scale edge, and combines the characteristics of the edge contour map, compared with a method of obtaining the edge only by semantic segmentation, the edge of the flat-plate scale can be more accurately detected.
4. According to the invention, the corresponding straight line fitting method is designed in consideration of the particularity of the flat-plate scale, so that the accuracy of the edge line fitting of the flat-plate scale is improved.
5. According to the invention, the cameras on two sides of the road are used for collecting the automobile images to detect the wheel landing positions, and compared with the method that the positions of the automobile such as the license plate and the automobile center point are used for determining the automobile position, the moving track of the automobile on the flat-plate scale can be accurately reflected, and the detection accuracy rate of the S-shaped driving behavior is improved. Moreover, the wheel key point detection neural network can not only improve the wheel touchdown point detection efficiency, but also improve the detection accuracy.
6. The S-shaped driving behavior of the vehicle is judged according to the coordinate position relation between the automobile tire and the edge of the flat-plate scale, the whole driving path of the automobile does not need to be analyzed, the detection result can be obtained only through a few frames of images, and the detection efficiency of the S-shaped driving behavior is improved.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 compares the flat-plate scale weighing curves during normal weighing and during S-shaped driving; FIG. 2(a) is a schematic view of the weighing curve during normal weighing of the automobile; FIG. 2(b) is a schematic view of the weighing curve when the automobile drives in an S shape;
FIG. 3 is a schematic view of a dynamic weighing area scene;
FIG. 4 is a block diagram of a platform balance;
FIG. 5 is a schematic diagram of the edge of the platform balance and the position of the car tire in the same coordinate system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a dynamic flat-plate scale weighing area automobile S-shaped driving behavior detection method. The main purpose is to adopt cameras on two sides of a flat-plate scale to monitor the cheating behaviors of wheel postures when an automobile is weighed to turn into an S-shaped curve and walk on an S-shaped route in real time. The flow chart of the invention is shown in figure 1. The following description will be made by way of specific examples.
FIG. 2 shows the comparison of the flat-plate scale weighing curves during normal weighing and during S-shaped driving. In FIG. 2(a), the car is weighed normally and the axle weight m is obtained. In FIG. 2(b), the car drives in an S shape: before time t0 only one wheel of the axle is on the flat-plate scale, from t0 to t1 the other wheel drives onto it, and from t1 to t2 both wheels are on the scale. The weighing result is therefore inaccurate and an axle weight m' < m is obtained. Since S-shaped driving distorts the weighing result, a method capable of identifying the S-shaped driving behavior of the automobile is necessary.
Example 1:
the method for detecting the S-shaped driving behavior of the automobile in the weighing area of the dynamic flat-plate scale comprises the following steps:
step 1, paving a narrow-strip flat-plate scale which is level with the ground on a road, and arranging cameras on two sides of the road respectively.
The S-shaped driving behavior detection of the automobile in the weighing area of the dynamic flat-plate scale is characterized in that firstly, a narrow-strip flat-plate scale which is level with the ground is paved on a road, and cameras are respectively arranged on two sides of the road. Fig. 3 is a schematic view of a dynamic weighing area scene. The optical axis of the cameras on the two sides of the road should be parallel to the edge line of the flat-plate scale.
The present embodiment employs the flat-plate scale structure shown in FIG. 4. The flat-plate dynamic truck scale uses resistance strain sensors: it is a dynamic weighing device whose working principle is to derive the axle load from the micro-deformation of the platen main beam detected by the sensors. This micro-deformation is recoverable within the elastic range of the rigid beam. Its working principle, force-measuring structure, and appearance are completely different from those of traditional bending-plate and weighbridge dynamic weighing devices. The equipment is installed seamlessly, fully flush with the road, so a passing vehicle neither sinks nor rocks, as if driving on flat ground. It is suitable for vehicle detection at high, medium, and low speeds.
And 2, carrying out edge extraction on the image of the flat-plate scale area acquired by the camera on one side of the road to obtain an edge profile map.
In order to obtain richer feature information, for an image acquired by a camera, an edge contour map of the image is obtained by adopting an edge extraction machine vision method, the map and an original image are equal in size, an implementer can select the edge extraction method according to the situation, and the sobel operator and the canny operator are suggested to be adopted to process the image. Then, the obtained edge contour map is subjected to image classification.
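A minimal edge-extraction sketch with the Sobel operator is shown below, assuming a grayscale image normalized to [0, 1]; the threshold value is an illustrative assumption, and in practice an implementer would likely use the OpenCV Sobel or Canny implementations as the text suggests.

```python
import numpy as np

def sobel_edges(img, thresh=0.25):
    """Minimal Sobel edge extraction; the output is the same size as the
    input, matching the edge contour map described in the text.

    img: 2-D float array in [0, 1]. Returns a binary edge map.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    p = np.pad(img, 1, mode="edge")  # pad so output size equals input size
    h, w = img.shape
    gx = np.zeros((h, w), dtype=np.float32)
    gy = np.zeros((h, w), dtype=np.float32)
    for i in range(3):
        for j in range(3):
            win = p[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)  # gradient magnitude
    return (mag / (mag.max() + 1e-8)) > thresh
```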
And 3, inputting the edge profile into a profile anomaly detection neural network, extracting features through a profile anomaly detection encoder to obtain a first feature map, analyzing the first feature map by a classification module, outputting a detection result of whether the profile is abnormal, returning to the step 2 to continue profile detection if the profile is abnormal, and taking a flat-plate scale area image corresponding to the edge profile to transfer to the step 4 to analyze if the profile is normal. In this embodiment, the classification module uses a full connection layer.
The training process of the contour anomaly detection neural network is as follows: the training set selects an edge contour map obtained by acquiring an image through edge detection by a multi-frame camera, and the edge contour map is marked as image categories, specifically two categories, namely a normal contour and an abnormal contour, wherein the abnormal contour mainly refers to the fact that uncontrollable factors such as vehicles, pedestrians and the like shield a weighing platform line. The loss function is a cross-entropy loss function. The network input is an edge profile graph, the profile anomaly detection encoder extracts features and outputs a first feature graph, the first feature graph is sent to a full connection layer after being subjected to Flatten operation, and an image classification result is output by a classification function. When the type is the profile abnormity, subsequent weighing platform straight line fitting is not carried out; and when the type is the profile is normal, performing subsequent weighing platform straight line fitting. The main purpose of step 3 is to reduce the calculation amount, on one hand, invalid edge perception can be avoided, and the fitting of an incorrect flat-plate scale straight line is prevented; on the other hand, images collected by the camera do not need to be screened manually, and the intelligence degree and convenience of the method are improved.
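The Flatten-plus-fully-connected classification head described above can be sketched as follows. This is a NumPy sketch with hypothetical shapes; a real implementation would be a trained network in a deep learning framework.

```python
import numpy as np

def contour_classifier_head(feature_map, weights, bias):
    """Classification module sketch: Flatten, fully connected layer,
    then softmax, as described for the contour anomaly detector.

    feature_map: (C, H, W) first feature map from the anomaly-detection
    encoder; weights: (2, C*H*W); bias: (2,). Returns probabilities for
    the two classes (index 0 = normal contour, 1 = abnormal/occluded).
    """
    x = feature_map.reshape(-1)        # Flatten operation
    logits = weights @ x + bias        # fully connected layer
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

When class 1 (abnormal contour) wins, the subsequent weighing-platform line fitting is skipped, as the text describes.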
And 4, inputting the image of the flat-plate scale region into a flat-plate scale perception neural network for analysis, extracting features through a flat-plate scale perception encoder, sampling and restoring the extracted features through a flat-plate scale perception decoder, and outputting a flat-plate scale semantic segmentation graph for distinguishing the semantics of the flat-plate scale, the road and other irrelevant elements.
The training content of the flat-plate scale perception neural network is as follows: select images containing the flat-plate scale in the weighing area as the training data set and label them, with the road labeled 1, the flat-plate scale labeled 2, and everything else labeled 3. 80% of the data set is randomly selected as the training set and the remaining 20% as the validation set. Input the image data and label data into the flat-plate scale perception neural network: the flat-plate scale perception encoder extracts features and outputs a flat-plate scale perception feature map; the flat-plate scale perception decoder then up-samples and reconstructs the feature map to obtain the flat-plate scale semantic segmentation map, which has the same size as the original image. Training uses a cross entropy loss function. The flat-plate scale perception neural network is based on a semantic segmentation neural network. It should be noted that the encoder and decoder behave as follows: the encoder expands the channels of the feature map while the image size is reduced, i.e., spatial precision decreases while the number of feature types increases; conversely, the decoder reduces the number of channels and increases the spatial precision of the feature map. The encoder and decoder may each include multiple convolution modules (i.e., CNN Blocks) or ResBlocks, between which the implementer may set pooling or sampling layers as needed. The implementer should weigh hardware computing power against accuracy when choosing the internal structure of the encoder and decoder.
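The per-pixel cross entropy loss used to train the segmentation network can be sketched as follows (NumPy; note that classes are indexed 0..K-1 here rather than the labels 1/2/3 used in the annotation scheme above):

```python
import numpy as np

def pixel_cross_entropy(pred_logits, labels):
    """Mean per-pixel cross-entropy loss for a K-class segmentation map
    (here K = 3: road / flat-plate scale / other).

    pred_logits: (K, H, W) raw class scores; labels: (H, W) integers
    in [0, K). Returns the mean negative log-likelihood over pixels.
    """
    k, h, w = pred_logits.shape
    z = pred_logits - pred_logits.max(axis=0, keepdims=True)  # stability
    log_softmax = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    # pick the log-probability of the true class at every pixel
    picked = log_softmax[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return -picked.mean()
```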
And 5, inputting the semantic segmentation graph of the flat-plate scale into a flat-plate scale edge perception neural network, extracting features through an edge perception encoder to obtain a second feature graph, combining the second feature graph with the first feature graph to obtain a third feature graph, performing weighted classification on the third feature graph through a plurality of fully-connected networks, and splicing and integrating the outputs of the plurality of fully-connected networks to obtain a flat-plate scale edge point distribution graph.
The edge-aware neural network of the flat-plate scale needs to be trained before being put into use: and constructing a sample set of the semantic segmentation map of the flat-plate scale, wherein the semantic segmentation map of the flat-plate scale comprises the flat-plate scale and a road. 80% of the sample set was randomly selected as the training set and the remaining 20% as the validation set.
The flat-plate scale semantic segmentation map is divided into a plurality of semantic segmentation subgraphs along the road direction. The implementer may choose the number of divisions according to the actual situation; it may be smaller than, or equal to, the number of pixels in the longitudinal direction, i.e., the direction perpendicular to the road. The edge perception encoder and each fully-connected network form a flat-plate scale edge perception branch. The label data of each branch are confidence labels for the positions of the edge points of the two flat-plate scale edge lines perpendicular to the road direction (i.e., the front and rear edge lines, where front and rear are defined relative to the driving direction of the automobile) on the semantic segmentation subgraph; specifically, the edge point pixels of the front and rear edge lines of the flat-plate scale in the subgraph are labeled 1 and all other pixels are labeled 0. Since the two edge lines are labeled symmetrically, the implementer need not label every pixel of the edge lines, which reduces the labeling workload.
Inputting a semantic segmentation image training set of the flat-plate scale and label data into a flat-plate scale edge perception neural network, wherein an edge perception encoder performs downsampling feature extraction on a semantic segmentation image to obtain a second feature image, combines the second feature image with the first feature image to obtain a third feature image, and expands the third feature image to be sent into a plurality of fully-connected networks to perform flat-plate scale edge point extraction. The number of the full-connection network is equal to that of the semantic segmentation subgraphs. And each flat-plate scale edge perception branch outputs an edge point position confidence map of the corresponding flat-plate scale edge line. The edge point position can be obtained using argmax. And training the edge perception neural network of the flat-plate scale by adopting a cross entropy loss function. The output of the full-connection networks is sequentially arranged in the direction vertical to the front edge line and the rear edge line of the flat-plate scale, and the output of the full-connection networks is spliced and integrated to obtain the distribution diagram of the edge points of the flat-plate scale.
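The splicing of the branch outputs into the edge point distribution map can be sketched as follows (a NumPy sketch; the shapes and the per-branch layout of two confidence rows, one per edge line, are illustrative assumptions):

```python
import numpy as np

def splice_edge_points(branch_confidences, width):
    """Splice per-branch edge-point confidences into a distribution map.

    branch_confidences: list over semantic segmentation subgraphs (one
    entry per fully-connected branch); each entry is a (2, width) array
    of confidences for the front and rear edge points along that row.
    argmax recovers each edge-point column, as described in the text;
    the output is a binary (n_branches, width) map with two marked
    points per row.
    """
    n = len(branch_confidences)
    dist = np.zeros((n, width), dtype=np.uint8)
    for row, conf in enumerate(branch_confidences):
        for line in range(2):                # front and rear edge lines
            col = int(np.argmax(conf[line]))  # most confident position
            dist[row, col] = 1
    return dist
```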
And 6, performing linear fitting on the edge points on the edge point distribution diagram of the flat-plate scale, and performing perspective transformation to obtain a top view of the edge line of the flat-plate scale.
In this embodiment, a dedicated straight-line fitting method is designed in view of the particular geometry of the flat-plate scale; straight lines are fitted to the obtained edge points to obtain accurate front and rear edge lines of the flat-plate scale. Since the edge point distribution map contains the edge points of both the front and the rear edge line, the number of edge points is even. Suppose the distribution map contains 2N edge points, where N is an integer; the following steps are then executed:
Step a: randomly select N of the 2N edge points and fit a straight line to them, obtaining line L1.
Step b: compute the sum of distances from the selected N edge points to line L1. If the sum is less than a first threshold, L1 is judged to be a valid line and the procedure continues with step c; otherwise it returns to step a.
Step c: fit a straight line L2 to the remaining N edge points by the least-squares method, and compute the sum of distances from these N edge points to L2. If the sum is less than the first threshold, L2 is judged to be a valid line; otherwise the procedure returns to step a.
Step d: compute the difference between the slopes of L1 and L2. If the difference is smaller than a second threshold, the two lines are taken as the edge lines of the flat-plate scale; otherwise the procedure returns to step a. Because the edges of the scale are parallel, the slopes of the fitted lines should be equal. Therefore, once valid lines L1 and L2 have been obtained, their slopes are compared against a threshold M, which should be set very small and serves only to absorb slight errors. When the slope difference is less than M, the two valid lines are judged parallel, matching the geometry of the scene, and are taken as the line-fitting result; when the slope difference is greater than or equal to M, the two lines are judged non-parallel, violating the parallel-edge property of the weighing platform, and the procedure returns to step a.
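Steps a through d can be sketched as follows. This is an illustrative implementation under stated assumptions: the function name, thresholds and synthetic test points are made up, and NumPy's least-squares `polyfit` stands in for the unspecified fitting routine.

```python
import numpy as np

def fit_parallel_edges(points, dist_thresh, slope_thresh, max_iter=5000, seed=0):
    """Sketch of steps a-d: repeatedly split the 2N points into two
    random halves, least-squares fit a line to each half, and accept
    the pair only when both fits are tight and near-parallel."""
    rng = np.random.default_rng(seed)
    n = len(points) // 2
    for _ in range(max_iter):
        idx = rng.permutation(len(points))
        halves = (points[idx[:n]], points[idx[n:]])
        lines = []
        for half in halves:
            # Steps a/c: least-squares fit y = k*x + b.
            k, b = np.polyfit(half[:, 0], half[:, 1], 1)
            # Steps b/c: sum of point-to-line distances against the first threshold.
            d = np.abs(k * half[:, 0] - half[:, 1] + b) / np.hypot(k, 1.0)
            if d.sum() >= dist_thresh:   # not a valid line: retry (back to step a)
                break
            lines.append((k, b))
        # Step d: accept only two valid, near-parallel lines.
        if len(lines) == 2 and abs(lines[0][0] - lines[1][0]) < slope_thresh:
            return lines
    return None

# Two parallel horizontal edges at y = 0 and y = 5 (synthetic points).
xs = np.arange(5.0)
pts = np.vstack([np.c_[xs, np.zeros(5)], np.c_[xs, np.full(5, 5.0)]])
result = fit_parallel_edges(pts, dist_thresh=1.0, slope_thresh=0.1)
```

A random half that mixes points from both edge lines produces a loose fit and is rejected by the distance test, so only the pure front-edge/rear-edge split survives.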
To reflect the positional relationship between the flat-plate scale and the automobile more accurately, a perspective transformation is applied to obtain a top view of the flat-plate scale edge lines. Perspective transformation is well known: a homography matrix is computed with the four-point method, and the top-view transformation is completed according to that matrix.
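The four-point homography computation can be sketched with plain linear algebra. The corner coordinates below are made-up illustrative values, not calibration data from the patent:

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Four-point method: solve the 3x3 homography (h33 fixed to 1)
    that maps the 4 src points onto the 4 dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, p):
    """Apply homography H to a 2-D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Image-plane corners of the scale region (made-up values) and the
# rectangle they should occupy in the top view.
src = [(120, 300), (520, 300), (600, 420), (60, 420)]
dst = [(0, 0), (400, 0), (400, 200), (0, 200)]
H = homography_from_4pts(src, dst)
```

Warping the fitted edge-line endpoints through `H` then yields the top view of the scale edges.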
Step 7: collect automobile images with the cameras on both sides of the road, and detect the wheel ground-contact points in the collected images.
The invention uses the cameras on both sides of the road to collect automobile images during weighing. The left and right cameras collect images of the weighed automobile, which are fed into a wheel key point detection neural network to obtain the key points of the left and right wheels; the key point coordinates are then obtained by post-processing. The side cameras photograph the sides of the automobile, and the key points are the contact positions between the wheels and the ground; this position minimizes the error introduced by projection. The wheel key point detection neural network comprises: a wheel key point detection encoder, which encodes the image and extracts features to obtain a wheel ground-contact feature map; and a wheel key point detection decoder, which up-samples the feature map to obtain a wheel ground-contact heatmap.
The wheel key point detection neural network must be trained. First, an image sample set is constructed, and the data are annotated with the wheel-ground contact points as key points. Annotation proceeds in two steps: first, the wheel key points in each image are marked, i.e. their X and Y coordinates; second, the marked key point map is convolved with a Gaussian kernel to form the wheel ground-contact heatmap label. The wheel key point detection encoder and decoder are then trained end to end on the collected image sample set and the annotated heatmap labels, with the image data and label data normalized so that the model converges faster. The encoder extracts features from the normalized input image and outputs a wheel ground-contact feature map; the decoder up-samples this feature map and finally generates the wheel ground-contact heatmap. The mean square error is used as the loss function. Finally, the heatmap output by the network is post-processed to obtain the wheel key point positions. Heatmap post-processing methods are well known: key point coordinates are regressed by finding local maxima, for example with NMS or soft-argmax.
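The Gaussian heatmap label and its argmax-style post-processing can be sketched as follows; the map size, sigma and keypoint location are arbitrary illustrative values, and the closed-form Gaussian stands in for the Gaussian-kernel convolution of a one-hot keypoint map:

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """Heatmap label for one wheel keypoint: a 2-D Gaussian centred
    on the annotated ground-contact pixel (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# Label for a keypoint at column 20, row 40 of a 64x64 heatmap.
hm = gaussian_heatmap(64, 64, cx=20, cy=40)

# Post-processing: recover the keypoint as the heatmap maximum.
cy_hat, cx_hat = np.unravel_index(np.argmax(hm), hm.shape)
```

Training then regresses the decoder output toward such labels under a mean-square-error loss.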
For the wheel key point detection encoder and decoder, an hourglass neural network structure combined with the block designs of lightweight networks such as GhostNet, MobileNetV3 and ThunderNet is recommended, so that wheel key points can be detected more quickly and accurately. The implementer may also use pre-trained networks such as Hourglass or HRNet for key point detection, which converge more quickly.
Step 8: transform the detected wheel ground-contact coordinates and the flat-plate scale edge-line top view into the same coordinate system, and judge whether the automobile exhibits S-shaped driving behavior according to the positional relationship between the wheel ground-contact points and the flat-plate scale edge lines.
The invention needs to observe the relationship between the wheels and the flat-plate scale edges in a single plane in order to judge whether S-curve cheating occurred during weighing. However, since the left and right wheels are photographed by two different cameras, their coordinates do not lie in the same coordinate system, and they must be transformed onto the same plane, i.e. into the same coordinate system.
Because the left and right wheel key points are not in the same image, and the automobile is a three-dimensional object, directly stitching the images from the two cameras would distort the automobile and lose accurate wheel information; the image-stitching approach therefore cannot be used. Instead, the key point coordinates of the left and right wheel ground contacts obtained from the wheel key point detection neural network are taken as projection targets and projected onto the plane containing the flat-plate scale edge lines. In this embodiment, the homography matrix required for the projection is solved with the RANSAC method, and all left and right wheel key points are then projected through this matrix onto the edge-line plane. Many methods exist for solving perspective transformations and transformation matrices; they are outside the scope of the invention and are not enumerated here.
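Once a homography has been estimated per camera (e.g. with RANSAC over point correspondences), projecting the wheel keypoints into the edge-line plane is a single matrix operation. The matrices below are made-up pure translations chosen only for illustration:

```python
import numpy as np

def project(H, pts):
    """Project pixel keypoints through homography H into the plane
    of the flat-plate scale edge lines."""
    pts = np.asarray(pts, float)
    homog = np.c_[pts, np.ones(len(pts))] @ H.T  # to homogeneous coords
    return homog[:, :2] / homog[:, 2:3]          # back to 2-D

# Made-up homographies for the left and right cameras; in practice each
# is estimated from correspondences between that camera and the plane.
H_left = np.array([[1.0, 0.0, -50.0], [0.0, 1.0, 10.0], [0.0, 0.0, 1.0]])
H_right = np.array([[1.0, 0.0, 30.0], [0.0, 1.0, -5.0], [0.0, 0.0, 1.0]])

left_wheel = project(H_left, [(200.0, 340.0)])
right_wheel = project(H_right, [(180.0, 355.0)])
```

After this step, points from both cameras share one coordinate frame with the scale edge lines.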
After projection, the left and right wheel key points and the flat-plate scale edge lines all lie in the same plane under the same coordinate system, where the relationship between the wheel key point coordinates and the upper and lower scale edges can be observed directly. A schematic of the scale edges and tire positions in this common coordinate system is shown in FIG. 5. Let the ordinates of the upper and lower edge lines of the flat-plate scale be y0 and y1 with y0 < y1, and let the ordinates of the left and right wheels be yL and yr. If y0 < yL < y1 and y0 < yr < y1, all wheels are on the scale during weighing and the automobile is weighed normally. If y0 < yL < y1 but yr < y0 or yr > y1, the right wheel is off the scale during weighing, and the weighing involves the cheating behavior of driving an S-shaped route. Similarly, if y0 < yr < y1 but yL > y1 or yL < y0, the left wheel is off the scale, and S-curve cheating is present in the weighing.
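The comparison rules above reduce to a few interval tests. In this sketch the function name and the both-wheels-off branch (a case the text does not cover) are assumptions:

```python
def s_curve_verdict(y0, y1, yl, yr):
    """Classify a weighing pass from projected wheel ordinates.
    y0 < y1 are the scale edge lines in the shared coordinate system;
    yl, yr are the left/right wheel ground-contact ordinates."""
    left_on = y0 < yl < y1
    right_on = y0 < yr < y1
    if left_on and right_on:
        return "normal weighing"          # both wheels on the scale
    if left_on != right_on:
        return "S-shaped route cheating"  # exactly one wheel off the scale
    return "off scale"                    # both wheels off (assumed label)

verdict = s_curve_verdict(100, 200, yl=150, yr=250)
```

With y0 = 100 and y1 = 200, a left wheel at 150 and a right wheel at 250 trigger the S-curve verdict, matching the second case in the text.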
The above embodiment is merely a preferred embodiment of the present invention and should not be construed as limiting it; any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention shall fall within its protection scope.

Claims (9)

1. A method for detecting S-shaped driving behaviors of an automobile in a weighing area of a dynamic flat-plate scale is characterized by comprising the following steps:
step 1, paving a narrow-strip flat-plate scale which is level with the ground on a road, and arranging cameras on two sides of the road respectively;
step 2, carrying out edge extraction on the image of the flat-plate scale area collected by the camera on one side of the road to obtain an edge contour map;
step 3, inputting the edge profile into a profile anomaly detection neural network, extracting features through a profile anomaly detection encoder to obtain a first feature map, analyzing the first feature map by a classification module, outputting a detection result of whether the profile is abnormal, returning to the step 2 to continue profile detection if the profile is abnormal, and taking a flat-plate scale area image corresponding to the edge profile to transfer to the step 4 to analyze if the profile is normal;
step 4, inputting the image of the flat-plate scale area into a flat-plate scale perception neural network for analysis, extracting features through a flat-plate scale perception encoder, sampling and restoring the extracted features through a flat-plate scale perception decoder, and outputting a flat-plate scale semantic segmentation graph for distinguishing the semantics of the flat-plate scale, the road and other irrelevant elements;
step 5, inputting the semantic segmentation graph of the flat-plate scale into a flat-plate scale edge perception neural network, extracting features through an edge perception encoder to obtain a second feature graph, combining the second feature graph with the first feature graph to obtain a third feature graph, carrying out weighted classification on the third feature graph through a plurality of fully-connected networks, and splicing and integrating the outputs of the plurality of fully-connected networks to obtain a flat-plate scale edge point distribution graph;
step 6, performing linear fitting on edge points on the edge point distribution diagram of the flat-plate scale, and performing perspective transformation to obtain a top view of the edge line of the flat-plate scale;
step 7, collecting automobile images by using cameras on two sides of a road, and detecting wheel touchdown points in the collected automobile images;
and 8, transforming the detected wheel landing point coordinates and the flat-plate scale edge line top view into the same coordinate system, and judging whether the automobile has S-shaped driving behaviors or not according to the position relation between the wheel landing point and the flat-plate scale edge line.
2. The method of claim 1, wherein said straight line fitting edge points on a flat panel scale edge point profile comprises:
setting a distribution diagram of edge points of the flat-plate scale to comprise 2N edge points, wherein N is an integer, and executing the following steps:
step a: randomly selecting N edge points and fitting a straight line to them, obtaining line L1;
step b: computing the sum of distances from the selected N edge points to line L1; if the sum is less than a first threshold, judging L1 to be a valid line and proceeding to step c; otherwise returning to step a;
step c: fitting a straight line L2 to the remaining N edge points, and computing the sum of distances from these N edge points to L2; if the sum is less than the first threshold, judging L2 to be a valid line; otherwise returning to step a;
step d: computing the slope difference between L1 and L2; if the difference is smaller than a second threshold, taking the two obtained lines as the edge lines of the flat-plate scale; otherwise returning to step a.
3. The method of claim 1, wherein the optical axes of the cameras on both sides of the roadway are parallel to the edge line of the flat-bed scale.
4. The method of claim 1, further comprising training the flat-bed scale edge-aware neural network to:
constructing a training set of the semantic segmentation graph of the flat plate scale, and dividing the semantic segmentation graph of the flat plate scale into a plurality of semantic segmentation subgraphs along the road direction;
the edge perception encoder and each full-connection network form a flat plate scale edge perception branch, and the label data of each branch is confidence degree labels of edge points of two flat plate scale edge lines which are vertical to the road direction on the semantic segmentation subgraph;
inputting the semantic segmentation map training set of the flat-plate scale and the label data into a flat-plate scale edge perception neural network, training the semantic segmentation map training set and the label data by adopting a cross entropy loss function, and outputting an edge point confidence map of a corresponding flat-plate scale edge line by each flat-plate scale edge perception branch.
5. The method according to claim 4, characterized in that the width of the edge points of the edge line of the flat-bed scale is in particular 1 pixel.
6. The method of claim 1, further comprising training the flat panel scale-aware neural network to:
constructing a weighing area, taking an image containing the flat-plate scale as a sample set, and respectively labeling roads, the flat-plate scale and other irrelevant elements on the sample set image;
inputting the image data and the labeled data of the sample set into a flat-plate scale perception neural network, carrying out feature extraction on the image by a flat-plate scale perception encoder, outputting a feature map, carrying out up-sampling reduction on the feature map by a flat-plate scale perception decoder, outputting a flat-plate scale semantic segmentation map, and training by adopting a cross entropy loss function.
7. The method of claim 1, wherein detecting wheel touchdown points in the captured vehicle image is embodied as:
detecting a wheel touchdown point based on a wheel key point detection neural network; wherein, wheel key point detects neural network includes:
the wheel key point detection encoder is used for encoding the image and extracting the characteristics to obtain a wheel landing point characteristic diagram;
and the wheel key point detection decoder is used for carrying out up-sampling reduction on the wheel landing point characteristic diagram to obtain a wheel landing point thermodynamic diagram.
8. The method of claim 7, further comprising training the wheel keypoint detection neural network to:
acquiring an automobile image to construct a sample set, carrying out wheel landing place labeling on the image in the sample set, and carrying out Gaussian kernel convolution processing on a labeled key point diagram to obtain wheel landing place thermodynamic diagram label data;
and training a wheel key point detection encoder and a wheel key point detection decoder end to end by adopting a mean square error loss function.
9. The method of claim 1, further comprising:
under the same coordinate system, the ordinates of the two edge lines of the flat-plate scale are y0 and y1 respectively, with y0 < y1, and the ordinates of the ground-contact points of the left and right wheels on the same axle of the automobile are yL and yr respectively; if y0 < yL < y1 and y0 < yr < y1, all wheels are on the flat-plate scale during weighing and normal weighing is judged;
if y0 < yL < y1 and yr > y1, or y0 < yL < y1 and yr < y0, or y0 < yr < y1 and yL > y1, or y0 < yr < y1 and yL < y0, S-curve cheating of the automobile is judged.
CN202010825107.5A 2020-08-17 2020-08-17 Method for detecting S-shaped driving behavior of automobile in weighing area of dynamic flat-plate scale Active CN111950478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010825107.5A CN111950478B (en) 2020-08-17 2020-08-17 Method for detecting S-shaped driving behavior of automobile in weighing area of dynamic flat-plate scale


Publications (2)

Publication Number Publication Date
CN111950478A CN111950478A (en) 2020-11-17
CN111950478B true CN111950478B (en) 2021-07-23

Family

ID=73343292


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114018379B (en) * 2021-10-30 2022-08-23 浙江东鼎电子股份有限公司 Dynamic weighing angular difference compensation method based on computer vision
CN114332437B (en) * 2021-12-28 2022-10-18 埃洛克航空科技(北京)有限公司 Vehicle area repair method, device, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704862A (en) * 2017-11-06 2018-02-16 深圳市唯特视科技有限公司 A kind of video picture segmentation method based on semantic instance partitioning algorithm
CN110008932A (en) * 2019-04-17 2019-07-12 四川九洲视讯科技有限责任公司 A kind of vehicle violation crimping detection method based on computer vision
CN110348384A (en) * 2019-07-12 2019-10-18 沈阳理工大学 A kind of Small object vehicle attribute recognition methods based on Fusion Features
CN110428522A (en) * 2019-07-24 2019-11-08 青岛联合创智科技有限公司 A kind of intelligent safety and defence system of wisdom new city
CN110490032A (en) * 2018-05-15 2019-11-22 武汉小狮科技有限公司 A kind of road environment cognitive method and device based on deep learning
CN111127429A (en) * 2019-12-24 2020-05-08 魏志康 Water conservancy system pipe thread defect detection method based on self-training deep neural network
CN111242015A (en) * 2020-01-10 2020-06-05 同济大学 Method for predicting driving danger scene based on motion contour semantic graph
CN111439259A (en) * 2020-03-23 2020-07-24 成都睿芯行科技有限公司 Agricultural garden scene lane deviation early warning control method and system based on end-to-end convolutional neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109726627B (en) * 2018-09-29 2021-03-23 初速度(苏州)科技有限公司 Neural network model training and universal ground wire detection method
CN111310582A (en) * 2020-01-19 2020-06-19 北京航空航天大学 Turbulence degradation image semantic segmentation method based on boundary perception and counterstudy


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Densely Based Multi-Scale and Multi-Modal Fully Convolutional Networks for High-Resolution Remote-Sensing Image Semantic Segmentation;Cheng Peng等;《IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 》;20190409;第12卷(第8期);第2612-2626页 *
Efficient Rail Area Detection Using Convolutional Neural Network;Zhangyu Wang等;《IEEE Access》;20181228;第6卷;第77656-77664页 *
Implementation of semantic segmentation for road and lane detection on an autonomous ground vehicle with LIDAR;Kai Li Lim等;《2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)》;20171211;第429-434页 *
Dim and Small Target Detection Method Based on Fully Convolutional Recurrent Networks; Yang Qili et al.; Acta Optica Sinica; 2020-07-31; Vol. 40, No. 13; pp. 1310002-1 to 1310002-13 *
Application of Multi-Class Edge Perception Methods in Image Segmentation; Dong Zihao et al.; Journal of Computer-Aided Design & Computer Graphics; 2019-07-31; Vol. 31, No. 7; pp. 1075-1085 *
Research on Personalized Assisted Driving Strategies for Intelligent Vehicles; Jiang Yuande; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 2020-02-15; Vol. 2020, No. 2; C035-15 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant