CN113327192A - Method for measuring and calculating automobile running speed through three-dimensional measurement technology - Google Patents

Method for measuring and calculating automobile running speed through three-dimensional measurement technology

Info

Publication number
CN113327192A
Authority
CN
China
Prior art keywords
dimensional
vehicle
point
automobile
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110514471.4A
Other languages
Chinese (zh)
Other versions
CN113327192B (en)
Inventor
王家奎
李淦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Veilytech Co ltd
Original Assignee
Wuhan Veilytech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Veilytech Co ltd filed Critical Wuhan Veilytech Co ltd
Priority to CN202110514471.4A priority Critical patent/CN113327192B/en
Publication of CN113327192A publication Critical patent/CN113327192A/en
Application granted granted Critical
Publication of CN113327192B publication Critical patent/CN113327192B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • G06F17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T3/08
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85: Stereo camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses a method for measuring and calculating the running speed of an automobile by a three-dimensional measurement technology, which comprises the following steps: S1, early preparation: preparing a camera, a vehicle, a specific calibration object and a three-dimensional model library containing different types of vehicles; S2, key point detection; S3, key point verification; S4, calculating the speed of the automobile. The method is scientific and reasonable in structure, safe and convenient to use, simple to operate and wide in application range, and can quickly acquire the driving speed of an automobile from camera footage, making it well suited for wider popularization and use.

Description

Method for measuring and calculating automobile running speed through three-dimensional measurement technology
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a method for measuring and calculating the running speed of an automobile by a three-dimensional measurement technology.
Background
The measurement of vehicle running speed is one of the hot spots in the field of intelligent transportation. Commonly used speed-measurement methods mainly rely on hardware devices. For example, in speed measurement with ground induction coils, coils are embedded under the road surface a known distance apart, and the speed is calculated from that distance and the time the vehicle takes to pass; in speed measurement with laser radar, the time difference of laser reflections yields a distance difference, from which the moving speed is calculated.
Whether radar or ground induction coil, both are limited by their speed-measurement principles to monitoring a relatively fixed, small-range area, and the hardware is comparatively expensive, so in practice such facilities are usually installed only at key points such as main intersections and highway checkpoints.
Disclosure of Invention
The invention provides a method for measuring and calculating the driving speed of an automobile by a three-dimensional measurement technology, which can effectively solve the problems raised in the background art: both radar and ground induction coils are limited by their speed-measurement principles to monitoring a relatively fixed, small-range area, and because the hardware is expensive, such facilities are in practice usually installed only at key points such as main intersections and highway checkpoints.
In order to achieve the purpose, the invention provides the following technical scheme: a method for measuring and calculating the running speed of an automobile by a three-dimensional measurement technology comprises the following steps:
s1: early preparation: preparing a camera, a car, a specific calibration object and a three-dimensional model library containing different types of cars;
s2, a key point detection technology;
s3, verifying key points;
and S4, calculating the speed of the automobile.
According to the technical scheme, the specific steps of S1 are as follows:
a1, deploying a calibration object on a region to be measured;
a2, taking a picture of the calibration object by using a camera and selecting a fixed point O on the picture as a three-dimensional coordinate origin;
a3, measuring the three-dimensional coordinates of the corner points on the calibration object;
a4, performing corner detection on a calibration object in the picture through a corner detection technology, thereby obtaining two-dimensional coordinates of each corner;
a5, putting the two-dimensional coordinates of the calibration object corner points in correspondence with their three-dimensional coordinates, and calculating the internal parameter matrix, external parameter matrix and distortion matrix of the camera with a corresponding calibration technique (a code sketch follows this list);
a6, in the automobile three-dimensional model library, taking n specific points on each selected automobile as key points P = {p1, p2, p3, …, pn}, and measuring the three-dimensional coordinates of each point.
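As an illustration of steps A4 and A5, the following Python sketch recovers the internal matrix, the distortion coefficients and the external parameters from matched 2D-3D corner correspondences with OpenCV. It generates synthetic data from a known ground-truth camera so that it runs self-contained; all variable names and values are illustrative, not part of the invention.

```python
import cv2
import numpy as np

# Synthetic demonstration of steps A3-A5: a planar grid of "corner points"
# with known three-dimensional coordinates is projected by a ground-truth
# camera, and cv2.calibrateCamera recovers the internal matrix K, the
# distortion coefficients and the external parameters [R|t] from the
# resulting 2D-3D correspondences.
grid = np.array([[x * 0.1, y * 0.1, 0.0]
                 for y in range(4) for x in range(5)], dtype=np.float32)

K_true = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
dist_true = np.zeros(5)

obj_pts, img_pts = [], []
views = [([0.0, 0.0, 0.0], [-0.2, -0.15, 1.0]),   # (rotation vector, translation)
         ([0.3, 0.1, 0.0], [-0.2, -0.15, 1.2]),
         ([-0.2, 0.25, 0.1], [-0.1, -0.1, 1.5])]
for r, t in views:
    proj, _ = cv2.projectPoints(grid, np.array(r), np.array(t), K_true, dist_true)
    obj_pts.append(grid)
    img_pts.append(proj.reshape(-1, 2).astype(np.float32))

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (640, 480), None, None)

R, _ = cv2.Rodrigues(rvecs[0])           # rotation of the first view
M = np.hstack([R, tvecs[0]])             # 3x4 external parameter matrix
print("RMS reprojection error:", rms)    # near zero for noise-free data
print("recovered K:\n", K.round(1))
```

In practice the 2D points come from the corner detection of A4 and the 3D points from the measurement of A3; several views of the calibration object improve the stability of the result.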
According to the above technical solution, the specific flow of step S2 is as follows:
b1, shooting a video of the automobile to be detected running near the point O by using a camera;
b2, taking screenshots of the video to obtain a series of pictures, each of which must contain the point O;
b3, correcting each picture by using the distortion matrix obtained in the previous step;
b4, detecting and classifying each automobile on each picture by using a classification network, and recording the classification;
b5, performing key point detection on each vehicle of each picture by using a key point detection network to obtain a two-dimensional coordinate of each key point;
b6, segmenting the vehicle region by using an image segmentation network to obtain a Mask (Mask) of the vehicle;
b7, using the category of each vehicle obtained in the previous step, matching the two-dimensional coordinates of the vehicle's key points against the three-dimensional key-point coordinates of the three-dimensional models in that category, and finding the model with the minimum error, which is taken as the approximate three-dimensional model of the vehicle;
and B8, combining the model mask image, the obtained two-dimensional and three-dimensional coordinates of the key points and the internal and external parameter matrices of the camera, the current attitude information of the vehicle and the three-dimensional coordinate information of the vehicle are obtained by using an attitude estimation algorithm.
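One concrete way to realize the attitude estimation of B8 is a PnP solve over the matched 2D-3D key points. The sketch below uses OpenCV's solvePnP; it is a minimal stand-in for the full procedure described later (which also uses the mask term and model selection), and all key point values are hypothetical.

```python
import cv2
import numpy as np

def estimate_vehicle_pose(pts_3d, pts_2d, K, dist):
    """Recover the vehicle pose (rotation + translation) from matched key
    points: pts_3d are model key points in the vehicle frame, pts_2d their
    detections in the (undistorted) picture."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(pts_3d, dtype=np.float32),
        np.asarray(pts_2d, dtype=np.float32),
        K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)      # 3x3 rotation matrix of the vehicle
    return R, tvec                  # tvec = vehicle position, camera frame

# Illustrative call with hypothetical window-corner key points:
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
pts_3d = [[-0.7, 1.2, 0.9], [0.7, 1.2, 0.9], [-0.6, -1.3, 0.9],
          [0.6, -1.3, 0.9], [-0.9, 1.8, 0.4], [0.9, 1.8, 0.4]]
pts_2d = [[300, 200], [380, 198], [305, 260],
          [372, 258], [280, 310], [400, 308]]
R, t = estimate_vehicle_pose(pts_3d, pts_2d, K, np.zeros(5))
print("vehicle 3D position in the camera frame:", t.ravel())
```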
According to the above technical solution, the specific flow of step S3 is as follows:
c1, estimating whether the key points under the attitude are visible points or not according to the attitude information of the automobile obtained in the previous step;
and C2, evaluating whether the key point information obtained in the previous step is credible according to whether each key point should be visible; if it is not credible, returning to the previous step and selecting the matching posture relying only on the segmented vehicle mask image.
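A minimal sketch of the visibility test in C1, under the assumption (made here only for illustration) that each model key point carries an outward surface normal: in the estimated pose, a point can only be visible if its normal, rotated into the camera frame, faces against the viewing ray.

```python
import numpy as np

def keypoint_visible(R, t, point_model, normal_model):
    """Back-face test: rotate a key point and its surface normal into the
    camera frame with the estimated pose (R, t); the point can only be
    visible if the normal points back toward the camera."""
    p_cam = R @ point_model + t          # key point in camera coordinates
    n_cam = R @ normal_model             # normal in camera coordinates
    view_ray = p_cam / np.linalg.norm(p_cam)
    return float(np.dot(n_cam, view_ray)) < 0.0

# Example: a point on the front of the car, normal toward the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])           # car 10 m in front of the camera
print(keypoint_visible(R, t, np.array([0.0, 0.0, -2.0]),
                       np.array([0.0, 0.0, -1.0])))  # True: faces the camera
```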
According to the above technical solution, the specific flow of step S4 is as follows:
d1, identifying each vehicle in each picture with a tracking network and matching all detection frames, so as to associate the same vehicle across different pictures;
d2, because the pictures are captured in time sequence and the time interval is known, the running track of each vehicle, namely its displacement, can be obtained from the matches of the same vehicle across different pictures;
d3, from the time interval between pictures and the displacement data, the average speed of the automobile over the video time can be calculated.
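The speed computation of D2-D3 reduces to dividing successive displacements by the known screenshot interval. A short sketch with illustrative coordinates:

```python
import numpy as np

def average_speeds(positions, interval_s):
    """positions: chronological 3D coordinates (metres) of one tracked
    vehicle, one per screenshot; interval_s: time between screenshots.
    Returns per-interval speeds and the overall average, in km/h."""
    positions = np.asarray(positions, dtype=float)
    displacements = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    per_interval = displacements / interval_s * 3.6     # m/s -> km/h
    overall = displacements.sum() / (interval_s * (len(positions) - 1)) * 3.6
    return per_interval, overall

# Six screenshots taken 10 s apart, as in the embodiment below
# (coordinates are illustrative):
track = [[0, 0, 0], [110, 2, 0], [220, 3, 0],
         [330, 5, 0], [440, 6, 0], [550, 8, 0]]
speeds, avg = average_speeds(track, 10.0)
print(speeds.round(1), "km/h per interval; overall", round(avg, 1), "km/h")
```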
According to the above technical scheme, the purpose of the calibration in S1 is to find the internal parameters (K), the external parameters (M) and the distortion parameters of the camera. Since the degree of distortion differs from lens to lens, calibration is required to obtain the parameters of the specific lens and correct the pictures it takes; the quality of this correction directly affects the precision of the subsequent work;
the specific process of calibration is as follows:
Calibration-board corner detection directly affects the precision of the acquired parameters. The principle for obtaining a corner point c is as follows: if c is the ideal corner position, p is a point in its neighborhood, and the gradient vector at p is g_p, then

$$g_p^\top (c - p) = 0$$

In practice the detection is not that accurate, so a candidate point c' is sought that best satisfies this relation over the whole neighborhood N, i.e.

$$c' = \arg\min_{c} \sum_{p \in N} \left( g_p^\top (c - p) \right)^2$$
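This least-squares refinement is what OpenCV's cornerSubPix implements; a self-contained sketch on a synthetic corner (window size and termination criteria are illustrative):

```python
import cv2
import numpy as np

# Synthetic checkerboard corner: two black squares meet two white squares,
# so the true saddle point is known (about (63.5, 63.5)).
img = np.full((128, 128), 255, dtype=np.uint8)
img[:64, :64] = 0
img[64:, 64:] = 0

# Coarse integer guess, e.g. from an initial corner detector; cornerSubPix
# then iterates the gradient condition g_p^T (c - p) = 0 over the search
# window to refine it to sub-pixel accuracy.
corners = np.array([[[62.0, 66.0]]], dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
refined = cv2.cornerSubPix(img, corners, winSize=(5, 5),
                           zeroZone=(-1, -1), criteria=criteria)
print("refined corner:", refined.ravel())   # close to the true saddle point
```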
A three-dimensional model library is also mentioned in this step: a large number of vehicle three-dimensional models can be obtained through network collection or through operations such as three-dimensional reconstruction of real vehicles. The models should cover common vehicle types, and each category may contain models of different vehicle types from various brands; the more comprehensive the model library, the more accurate the finally obtained speed information.
According to the above technical scheme, the three neural networks mentioned in step S2 are the classification network, the key point detection network and the image segmentation network; the key point position information obtained in this step is two-dimensional coordinate information, which is put in correspondence with the three-dimensional coordinates of the corresponding points so that the attitude estimation of the next step can be performed;
The attitude in the attitude estimation algorithm refers to the three-dimensional position of the current target in the three-dimensional coordinate system together with its three-dimensional rotation angles (pitch angle, azimuth angle and roll angle);
The external parameter matrix M of the camera obtained in the previous step is also needed. The specific process is as follows:
select n key points, such as the 2 corners of the front window of the vehicle, the 2 corners of the rear window, 2 corners of the engine, the wheels and other distinctive points; measure the three-dimensional coordinates of the selected points and put them in correspondence with the two-dimensional coordinates of the visible points in the picture taken by the camera;
The projection error can be defined as the following equation:

$$J(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - \hat{p}_i \right\|^2 + H\left(L_{pro},\, L\right)$$

where the vector θ holds the attitude parameters of the object, n is the number of key points, p_i is the two-dimensional coordinate of the i-th key point detected in the picture, \hat{p}_i is the two-dimensional coordinate obtained by projecting the three-dimensional coordinate of the i-th key point through the camera, P_i is the corresponding three-dimensional point coordinate, L_{pro} is the envelope of the projection of the corresponding three-dimensional model in the image, L is the envelope of the vehicle mask obtained by the segmentation network, and H(·,·) is the Hausdorff distance. The projection and the attitude parameters are:

$$\hat{p}_i = K \, M(\theta) \, P_i, \qquad \theta = (\theta_r, \theta_t)$$

where K is the camera projection matrix, M is the pose matrix, θ_r is the rotation vector, whose rotation order is Z → Y → X, and θ_t is the translation vector. Taking the error formula above as the loss function, the optimizer Adam is selected to iterate on it; after a period of iteration, the attitude θ* that minimizes the error J is obtained, that is:

$$\theta^* = \arg\min_{\theta} J(\theta)$$
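A hedged sketch of this optimization using PyTorch's Adam on the key-point term of J(θ) (the Hausdorff mask term is omitted for brevity; the rotation is applied in the stated Z → Y → X order, and all numeric values are illustrative):

```python
import torch

def rotate_zyx(P, r):
    """Apply Euler rotations in the order Z -> Y -> X to points P (n x 3)."""
    rx, ry, rz = r
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    x, y = x * torch.cos(rz) - y * torch.sin(rz), x * torch.sin(rz) + y * torch.cos(rz)
    x, z = x * torch.cos(ry) + z * torch.sin(ry), -x * torch.sin(ry) + z * torch.cos(ry)
    y, z = y * torch.cos(rx) - z * torch.sin(rx), y * torch.sin(rx) + z * torch.cos(rx)
    return torch.stack([x, y, z], dim=1)

# Hypothetical data: intrinsics K, model key points P (n x 3), detections p (n x 2).
K = torch.tensor([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P = torch.tensor([[-0.7, 1.2, 0.9], [0.7, 1.2, 0.9], [-0.6, -1.3, 0.9],
                  [0.6, -1.3, 0.9], [-0.9, 1.8, 0.4], [0.9, 1.8, 0.4]])
p = torch.tensor([[300., 200.], [380., 198.], [305., 260.],
                  [372., 258.], [280., 310.], [400., 308.]])

theta_r = torch.zeros(3, requires_grad=True)                # rotation vector
theta_t = torch.tensor([0., 0., 10.], requires_grad=True)   # translation vector

opt = torch.optim.Adam([theta_r, theta_t], lr=0.01)
for _ in range(2000):
    opt.zero_grad()
    cam = rotate_zyx(P, theta_r) + theta_t      # model points in camera frame
    proj = cam @ K.T                            # project with K
    p_hat = proj[:, :2] / proj[:, 2:3]          # perspective divide
    loss = ((p - p_hat) ** 2).sum() / len(P)    # key-point term of J(theta)
    loss.backward()
    opt.step()
print("loss:", float(loss), "theta_r:", theta_r.data, "theta_t:", theta_t.data)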
According to the above technical solution, in step S3, considering that vehicle types are numerous and many vehicle models look similar, the data obtained through the networks needs a certain degree of verification. The method adopted in this step is as follows: given the pose in which the vehicle appears in the picture, not all key points are visible; the model pose obtained from the detected key points is used to deduce in reverse whether each point should be visible, and if the two contradict each other, the key point can be judged not credible.
According to the above technical solution, in S4 a convolutional neural network is trained through the deep learning technique. The network is divided into three parts. The first part is a detection network, which finds the corresponding target detection frames in a picture; here the detection frame position of each vehicle can be provided together with the classification given by the classification network mentioned above. The second part is a segmentation network, used to extract the vehicle mask inside the detection frames of the first part. The third part is a matching network that uses the image features extracted by the first two networks; because the feature correlation of the same object across pictures is higher than that between different objects, the same vehicle in different pictures can be found by setting a threshold.
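The role of the matching network can be sketched as thresholded similarity between appearance features (the real features come from the first two sub-networks; the vectors below are illustrative placeholders):

```python
import numpy as np

def match_detections(feats_a, feats_b, threshold=0.5):
    """Greedy matching of detections between two pictures using cosine
    similarity of their appearance features: a sketch of the matching
    network's role, not its actual architecture."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                                # pairwise similarity matrix
    matches = []
    for i in range(sim.shape[0]):
        j = int(sim[i].argmax())
        if sim[i, j] > threshold:                # same vehicle only above 0.5
            matches.append((i, j, float(sim[i, j])))
    return matches

# Illustrative 4-dimensional features for two pictures with two vehicles each:
f1 = np.array([[0.9, 0.1, 0.0, 0.2], [0.1, 0.8, 0.3, 0.0]])
f2 = np.array([[0.1, 0.9, 0.2, 0.1], [0.8, 0.2, 0.1, 0.3]])
print(match_detections(f1, f2))   # vehicle 0 <-> 1, vehicle 1 <-> 0
```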
Compared with the prior art, the invention has the beneficial effects that: the method of measuring the driving speed of an automobile from camera footage is simple and convenient to operate and wide in application range, and is well suited for wider popularization and use.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic overall flow diagram of an embodiment of the present invention;
FIG. 2 is a schematic illustration of a calibration object;
FIG. 3 is a schematic view of a three-dimensional model of a vehicle;
FIG. 4 is a schematic diagram of key points of an automobile;
FIG. 5 is a schematic view of a vehicle mask spot;
FIG. 6 is a schematic view of the classification of the car inspection box;
FIG. 7 is a schematic diagram of vehicle keypoint estimation and error;
FIG. 8 is a schematic diagram of auto mask estimation and error;
FIG. 9 is a schematic view of an automobile attitude estimation;
FIG. 10 is a schematic view of the displacement and velocity of a vehicle.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example: As shown in FIG. 1, the invention provides a technical solution, a method for measuring and calculating the driving speed of an automobile by a three-dimensional measurement technology, comprising the following steps:
obtaining a picture of an automobile A through a monitoring camera A, wherein the picture needs to comprise a calibration object, the automobile A and a three-dimensional coordinate origin O;
collecting three-dimensional models of different types of automobiles from the network, and scaling the models to the actual size of the automobile;
selecting the key points of the automobile, namely the 4 corners of the front windshield, the 4 corners of the rear windshield, the left and right rear-view mirrors, the leftmost and rightmost edges of the head of the automobile and the two outermost edges of the tail of the automobile; a schematic diagram of the key points is shown in FIG. 4;
detecting a calibration object and calculating the internal and external parameters and distortion parameters of the camera by a calibration technology;
obtaining 1-minute driving videos of the automobile B to be detected and the automobile C to be detected, and capturing a screenshot every 10 seconds to obtain 6 pictures;
respectively carrying out distortion correction on the pictures;
respectively passing the 6 pictures through the neural network models to obtain the key point positions, masks and classification results of each automobile; see FIGS. 4, 5, 6, 7 and 8;
calculating the attitude of the current model according to the acquired camera parameters, the two-dimensional information of the key points, the three-dimensional key point information of the vehicle and the three-dimensional model of the vehicle by using an attitude estimation algorithm, wherein an attitude schematic diagram is shown in FIG. 9;
respectively inputting the detection frames on the 6 pictures into a detection frame matching network model according to the video time sequence for matching, screening according to the matching degree, and setting the threshold value to be 0.5;
calculating the displacement of the automobile between every two images according to the matching result and the three-dimensional coordinates of the automobile in each detection frame; since the screenshot interval is 10 seconds, the average speed of the automobile over that period can be calculated from the displacement and the time; a displacement schematic diagram is shown in FIG. 10;
and 5 average speeds are obtained for each vehicle from the 6 pictures, from which the speed of each vehicle to be detected over the whole video time can be calculated.
In step 1 of the above steps, the checkerboard is pasted on a carton and the three-dimensional coordinates of each corner point are measured; the calibration method uses the ArUco method provided by the OpenCV library, and a detection schematic diagram of the calibration object is shown in FIG. 2;
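A sketch of the ArUco detection named above (requires the opencv-contrib-python package; the file name is hypothetical, and in OpenCV 4.7+ the cv2.aruco.ArucoDetector class replaces the call shown here):

```python
import cv2

# Detect ArUco markers on the calibration object, as in step 1.
img = cv2.imread("calibration_photo.jpg")        # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary)
print("detected marker ids:", None if ids is None else ids.ravel())
# Each entry of `corners` holds the four 2D corner coordinates of one
# marker; pairing them with the measured 3D corner coordinates feeds the
# calibration of steps A3-A5.
```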
In step 4 of the above steps, the key points are chosen as points that have distinctive features and that are preferably present on all vehicles of the same type;
In step 6 of the above steps, distortion correction adopts the undistort method of the OpenCV library;
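A sketch of this correction with OpenCV's undistort (K and the distortion coefficients come from the earlier calibration; the values and file names here are illustrative):

```python
import cv2
import numpy as np

# Correct a screenshot with the calibration results: K is the internal
# matrix and dist the distortion coefficients obtained earlier.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
dist = np.array([-0.25, 0.07, 0.0, 0.0, 0.0])    # k1, k2, p1, p2, k3
frame = cv2.imread("screenshot_00.png")          # hypothetical file name
undistorted = cv2.undistort(frame, K, dist)
cv2.imwrite("screenshot_00_undistorted.png", undistorted)
```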
In step 7 of the above steps, the classification network and the key point detection network are merged, so that the network detects the key points while detecting the vehicle; this can be realized with a Cascade R-CNN network, implemented here with the Detectron2 framework. During training, a large number of vehicle pictures with their key points must be annotated;
In steps 8 and 9 of the above steps, key point verification requires computing the attitude estimation once for each possible model, and then evaluating in reverse, from the attitude information, whether each key point is plausible;
In step 10 of the above steps, detection frame matching adopts the matching component (the Re-ID model) of a multi-object tracking (MOT) network. After a period of training, the network model gives a matching degree between input detection frames; the corresponding frame with the highest matching degree above 0.5 is selected as the match, and, combined with the three-dimensional coordinates of the vehicle obtained in the previous steps, the different positions of a vehicle in different pictures are obtained, yielding the displacement of the vehicle over the interval time;
In step 11 of the above steps, the displacement of the vehicle between two pictures is calculated. Because the screenshot interval is known and fixed, the order of the pictures must not be confused, otherwise the calculated speed will be very inaccurate; the shorter the screenshot interval, the higher the accuracy of the speed.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A method for measuring and calculating the running speed of an automobile by a three-dimensional measurement technology is characterized in that: the method comprises the following steps:
s1: early preparation: preparing a camera, a car, a specific calibration object and a three-dimensional model library containing different types of cars;
s2, a key point detection technology;
s3, verifying key points;
and S4, calculating the speed of the automobile.
2. The method for measuring and calculating the driving speed of a vehicle according to claim 1, wherein the step of S1 is as follows:
a1, deploying a calibration object on a region to be measured;
a2, taking a picture of the calibration object by using a camera and selecting a fixed point O on the picture as a three-dimensional coordinate origin;
a3, measuring the three-dimensional coordinates of the corner points on the calibration object;
a4, performing corner detection on a calibration object in the picture through a corner detection technology, thereby obtaining two-dimensional coordinates of each corner;
a5, corresponding the two-dimensional coordinates of the calibration object corner points with the three-dimensional coordinates thereof, and calculating an internal parameter matrix, an external parameter matrix and a distortion matrix of the camera by using corresponding calibration technology;
a6, in the automobile three-dimensional model library, selecting n specific points on each automobile as key points P = {p1, p2, p3, …, pn}, and measuring the three-dimensional coordinates of each point.
3. The method for measuring and calculating the driving speed of a vehicle according to claim 1, wherein the step of S2 is as follows:
b1, shooting a video of the automobile to be detected running near the point O by using a camera;
b2, screenshot the video to obtain a series of pictures, wherein each picture must contain an O point;
b3, correcting each picture by using the distortion matrix obtained in the previous step;
b4, detecting and classifying each automobile on each picture by using a classification network, and recording the classification;
b5, performing key point detection on each vehicle of each picture by using a key point detection network to obtain a two-dimensional coordinate of each key point;
b6, segmenting the vehicle region by using an image segmentation network to obtain a Mask (Mask) of the vehicle;
b7, matching the two-dimensional coordinates of the key points of each vehicle with the three-dimensional coordinates of the key points of the three-dimensional model in the category through the category of each vehicle obtained in the previous step, and finding out a model with the minimum error, wherein the model is the approximate three-dimensional model of the vehicle;
and B8, combining the model mask image, obtaining the two-dimensional and three-dimensional coordinates of the key points and the internal and external parameter matrixes of the camera, and obtaining the current posture information of the vehicle and the three-dimensional coordinate information of the vehicle by utilizing a posture estimation algorithm.
4. The method for measuring and calculating the driving speed of a vehicle according to claim 1, wherein the step of S3 is as follows:
c1, estimating whether the key points under the attitude are visible points or not according to the attitude information of the automobile obtained in the previous step;
and C2, evaluating whether the key point information obtained in the previous step is credible according to whether each key point should be visible; if it is not credible, returning to the previous step and selecting the matching posture relying only on the segmented vehicle mask image.
5. The method for measuring and calculating the driving speed of a vehicle according to claim 1, wherein the step of S4 is as follows:
d1, identifying each vehicle in each picture with a tracking network and matching all detection frames, so as to associate the same vehicle across different pictures;
d2, because the pictures are captured in time sequence and the time interval is known, the running track of each vehicle, namely its displacement, can be obtained from the matches of the same vehicle across different pictures;
d3, according to the time interval between pictures and the displacement data, the average speed of the automobile in the video time can be calculated.
6. The method for measuring and calculating the driving speed of an automobile through the three-dimensional measurement technology as claimed in claim 2, wherein the calibration in S1 is performed to obtain the internal parameters (K), the external parameters (M) and the distortion parameters of the camera. Since the degree of distortion differs from lens to lens, calibration is required to obtain the parameters of the specific lens and correct the pictures taken with it; the quality of this correction directly affects the precision of the subsequent work;
the specific process of calibration is as follows:
Calibration-board corner detection directly affects the precision of the acquired parameters. The principle for obtaining a corner point c is as follows: if c is the ideal corner position, p is a point in its neighborhood, and the gradient vector at p is g_p, then

$$g_p^\top (c - p) = 0$$

In practice the detection is not that accurate, so a candidate point c' is sought that best satisfies this relation over the whole neighborhood N, i.e.

$$c' = \arg\min_{c} \sum_{p \in N} \left( g_p^\top (c - p) \right)^2$$
A three-dimensional model library is also mentioned in this step: a large number of vehicle three-dimensional models can be obtained through network collection or through operations such as three-dimensional reconstruction of real vehicles. The models should cover common vehicle types, and each category may contain models of different vehicle types from various brands; the more comprehensive the model library, the more accurate the finally obtained speed information.
7. The method for measuring and calculating the driving speed of an automobile through the three-dimensional measurement technology as claimed in claim 1, wherein the three neural networks mentioned in S2 are the classification network, the key point detection network and the image segmentation network; the key point position information obtained in this step is two-dimensional coordinate information, which is put in correspondence with the three-dimensional coordinates of the corresponding points so that the attitude estimation of the next step can be performed;
The attitude in the attitude estimation algorithm refers to the three-dimensional position of the current target in the three-dimensional coordinate system together with its three-dimensional rotation angles (pitch angle, azimuth angle and roll angle);
an external parameter matrix M of the camera obtained in the previous step is also needed; the specific process is as follows:
selecting n key points, such as the 2 corners of the front window of the vehicle, the 2 corners of the rear window, 2 corners of the engine, the wheels and other distinctive points; measuring the three-dimensional coordinates of the selected points and putting them in correspondence with the two-dimensional coordinates of the visible points in the picture taken by the camera;
The projection error can be defined as the following equation:

$$J(\theta) = \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - \hat{p}_i \right\|^2 + H\left(L_{pro},\, L\right)$$

where the vector θ holds the attitude parameters of the object, n is the number of key points, p_i is the two-dimensional coordinate of the i-th key point detected in the picture, \hat{p}_i is the two-dimensional coordinate obtained by projecting the three-dimensional coordinate of the i-th key point through the camera, P_i is the corresponding three-dimensional point coordinate, L_{pro} is the envelope of the projection of the corresponding three-dimensional model in the image, L is the envelope of the vehicle mask obtained by the segmentation network, and H(·,·) is the Hausdorff distance. The projection and the attitude parameters are:

$$\hat{p}_i = K \, M(\theta) \, P_i, \qquad \theta = (\theta_r, \theta_t)$$

where K is the camera projection matrix, M is the pose matrix, θ_r is the rotation vector, whose rotation order is Z → Y → X, and θ_t is the translation vector. Taking the error formula above as the loss function, the optimizer Adam is selected to iterate on it; after a period of iteration, the attitude θ* that minimizes the error J is obtained, that is:

$$\theta^* = \arg\min_{\theta} J(\theta)$$
8. The method for measuring and calculating the driving speed of an automobile through the three-dimensional measurement technology as claimed in claim 1, wherein in step S3, considering that vehicle types are numerous and many vehicle models look similar, the data obtained through the networks needs a certain degree of verification. The method adopted in this step is as follows: given the pose in which the vehicle appears in the picture, not all key points are visible; the model pose obtained from the detected key points is used to deduce in reverse whether each point should be visible, and if the two contradict each other, the key point can be judged not credible.
9. The method for measuring and calculating the driving speed of a vehicle through the three-dimensional measurement technology as claimed in claim 1, wherein in S4 a convolutional neural network is trained through the deep learning technique. The network is divided into three parts: the first part is a detection network, which finds the corresponding target detection frames in the picture; here the detection frame position of each vehicle can be provided together with the classification given by the classification network mentioned above. The second part is a segmentation network, used to extract the vehicle mask inside the detection frames of the first part. The third part is a matching network that uses the image features extracted by the first two networks; because the feature correlation of the same object across pictures is higher than that between different objects, the same vehicle in different pictures can be found by setting a threshold.
CN202110514471.4A 2021-05-11 2021-05-11 Method for measuring and calculating automobile running speed through three-dimensional measurement technology Active CN113327192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110514471.4A CN113327192B (en) 2021-05-11 2021-05-11 Method for measuring and calculating automobile running speed through three-dimensional measurement technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110514471.4A CN113327192B (en) 2021-05-11 2021-05-11 Method for measuring and calculating automobile running speed through three-dimensional measurement technology

Publications (2)

Publication Number Publication Date
CN113327192A 2021-08-31
CN113327192B CN113327192B (en) 2022-07-08

Family

ID=77415442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110514471.4A Active CN113327192B (en) 2021-05-11 2021-05-11 Method for measuring and calculating automobile running speed through three-dimensional measurement technology

Country Status (1)

Country Link
CN (1) CN113327192B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762177A (en) * 2021-09-13 2021-12-07 成都市谛视科技有限公司 Real-time human body 3D posture estimation method and device, computer equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008146114A2 (en) * 2007-06-01 2008-12-04 Toyota Jidosha Kabushiki Kaisha Measurement device, measurement method, program, and computer readable medium
CN101777261A (en) * 2009-03-25 2010-07-14 长春理工大学 Method for measuring vehicle speed based on CMOS digital camera with belt-type shutter
US20140336848A1 (en) * 2013-05-10 2014-11-13 Palo Alto Research Center Incorporated System and method for detecting, tracking and estimating the speed of vehicles from a mobile platform
CN111079675A (en) * 2019-12-23 2020-04-28 武汉唯理科技有限公司 Driving behavior analysis method based on target detection and target tracking
CN111126161A (en) * 2019-11-28 2020-05-08 北京联合大学 3D vehicle detection method based on key point regression
WO2021004312A1 (en) * 2019-07-08 2021-01-14 中原工学院 Intelligent vehicle trajectory measurement method based on binocular stereo vision system
CN112464889A (en) * 2020-12-14 2021-03-09 刘啟平 Road vehicle attitude and motion information detection method
CN112489126A (en) * 2020-12-10 2021-03-12 浙江商汤科技开发有限公司 Vehicle key point information detection method, vehicle control method and device and vehicle


Also Published As

Publication number Publication date
CN113327192B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN111462135B (en) Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation
US10580164B2 (en) Automatic camera calibration
US8212812B2 (en) Active shape model for vehicle modeling and re-identification
CN107230218B (en) Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras
CN111448478B (en) System and method for correcting high-definition maps based on obstacle detection
Zielke et al. Intensity and edge-based symmetry detection with an application to car-following
EP3739545A1 (en) Image processing method and apparatus, vehicle-mounted head up display system, and vehicle
CN113936198B (en) Low-beam laser radar and camera fusion method, storage medium and device
CN111340855A (en) Road moving target detection method based on track prediction
CN111738032B (en) Vehicle driving information determination method and device and vehicle-mounted terminal
CN103810475A (en) Target object recognition method and apparatus
CN113327192B (en) Method for measuring and calculating automobile running speed through three-dimensional measurement technology
CN115327572A (en) Method for detecting obstacle in front of vehicle
CN110176022B (en) Tunnel panoramic monitoring system and method based on video detection
CN113029185A (en) Road marking change detection method and system in crowdsourcing type high-precision map updating
Meuter et al. 3D traffic sign tracking using a particle filter
CN111353481A (en) Road obstacle identification method based on laser point cloud and video image
CN116643291A (en) SLAM method for removing dynamic targets by combining vision and laser radar
CN116385997A (en) Vehicle-mounted obstacle accurate sensing method, system and storage medium
CN107292932B (en) Head-on video speed measurement method based on image expansion rate
CN115235493A (en) Method and device for automatic driving positioning based on vector map
Tummala et al. SmartDashCam: automatic live calibration for DashCams
CN114926332A (en) Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle
CN111539279A (en) Road height limit height detection method, device, equipment and storage medium
JP7383584B2 (en) Information processing devices, information processing methods, programs, and vehicle control systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant