CN112489106A - Video-based vehicle size measuring method and device, terminal and storage medium - Google Patents

Video-based vehicle size measuring method and device, terminal and storage medium

Info

Publication number
CN112489106A
CN112489106A (application CN202011423390.5A)
Authority
CN
China
Prior art keywords
boundary
vehicle
point
image
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011423390.5A
Other languages
Chinese (zh)
Inventor
曹泉
何小晨
洪梓杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hagong Communication Electronics Co ltd
Original Assignee
Shenzhen Hagong Communication Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Hagong Communication Electronics Co ltd filed Critical Shenzhen Hagong Communication Electronics Co ltd
Priority to CN202011423390.5A priority Critical patent/CN112489106A/en
Publication of CN112489106A publication Critical patent/CN112489106A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the technical field of traffic monitoring and provides a video-based vehicle size measuring method, device, terminal and storage medium for solving the problem of vehicle size measurement. The video-based vehicle size measuring method comprises the following steps: acquiring an image of a video scene; establishing a first coordinate system in the video scene, the first coordinate system being a three-dimensional coordinate system; acquiring calibration data of a camera and establishing a coordinate mapping relation between the first coordinate system and points on the image; acquiring a foreground binary image and a foreground mask of the vehicle to be measured from the image; and acquiring intersection points of the boundary lines of the foreground mask, obtaining the coordinates of the intersection points from the calibration data, and obtaining the size of the vehicle from the coordinates of the intersection points. The size of the vehicle can thus be measured from the video image, greatly reducing the cost of vehicle size detection.

Description

Video-based vehicle size measuring method and device, terminal and storage medium
Technical Field
The invention relates to the technical field of traffic monitoring, in particular to a video-based vehicle size measuring method.
Background
In the field of video monitoring, vehicle length, width and height information is used in many applications, including over-height and oversize detection of vehicles and automatic toll collection systems based on vehicle type classification, all of which require accurate length, width and height measurements. Conventional vehicle length, width and height measuring systems mostly use laser radar or infrared cameras.
For example, the patent "Vehicle length, width and height measuring device based on laser ranging technology" (application No. CN201620122554.3) discloses a device that can accurately obtain the length, width and height of a vehicle on an expressway, but the cost of laser ranging equipment and its installation is high. The patent "Method for measuring the length, width and height of a vehicle based on an RGB-D camera" (application No. CN201810298243.6) discloses a method that obtains a clear vehicle image with a close-range camera installation and converts the vehicle target into a three-dimensional point cloud in the world coordinate system using the camera's depth image. These solutions can only measure vehicles in a single lane at short range. Obtaining vehicle length, width and height from a monocular camera has long been a difficult problem, especially at long range. The patent "3D vehicle target detection method based on monocular vision and geometric constraint" (application No. CN201910684070.6) discloses a method that obtains the vehicle observation angle through a convolutional neural network, obtains the translation of the vehicle bottom-face center along the Y axis and the X axis of the vehicle-mounted monocular camera's coordinate system by table lookup, and thereby draws the three-dimensional bounding box of the vehicle. However, that patent does not disclose the camera calibration process or the method of constructing the coordinate lookup table.
Disclosure of Invention
The invention solves the technical problem of vehicle size measurement and provides a video-based vehicle size measurement method.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a video-based vehicle sizing method, comprising:
acquiring an image of a video scene; establishing a first coordinate system in the video scene, wherein the first coordinate system is a three-dimensional coordinate system;
acquiring calibration data of the camera, and establishing a coordinate mapping relation between a first coordinate system and points on an image;
acquiring a foreground binary image and a foreground mask of the vehicle to be detected according to the image;
and acquiring the intersection point of the boundary line of the foreground mask, acquiring the coordinate of the intersection point according to the calibration data, and acquiring the size of the vehicle according to the coordinate of the intersection point.
The image of the vehicle to be measured is binarized to obtain a foreground mask, and the intersection points of the boundary lines of the foreground mask can represent the size of the vehicle. Through the mapping relation between the first coordinate system and the image, the coordinates of an intersection point in the video scene can be calculated from its image coordinates, so that the size of the vehicle can be calculated.
The size of the vehicle can be measured through the video image, the cost of vehicle size detection is greatly reduced, a plurality of vehicles in the image can be detected simultaneously, and the detection efficiency is greatly improved.
Preferably, in the first coordinate system: the X axis lies on the ground along the road driving direction, the Y axis lies on the ground orthogonal to the road driving direction, and the Z axis is perpendicular to the ground; the origin of the first coordinate system is the central point of the image, and the optical axis of the camera passes through the origin of the first coordinate system. Establishing the first coordinate system in this way makes it easier to map points on the image captured by the camera to coordinates in the first coordinate system using the camera data obtained from calibration.
Preferably, the method of acquiring the intersection of the boundary lines of the foreground mask includes:
acquiring a calibration template adopted in the process of calibrating the camera, and acquiring at least two vanishing points based on the calibration template;
establishing connecting lines of all points on the boundary line of the vehicle foreground mask and a vanishing point, wherein in all the connecting lines, the connecting line closest to the bottom of the vehicle is taken as a bottom boundary, and the connecting line closest to the top of the vehicle is taken as a top boundary;
establishing connecting lines of all points on a boundary line of a vehicle foreground mask and another vanishing point, wherein in all the connecting lines, the connecting line closest to the rear part of the vehicle is taken as a rear boundary, and the connecting line closest to the front part of the vehicle is taken as a front boundary;
the intersection of the bottom boundary and the front boundary is a first point, and the intersection of the top boundary and the rear boundary is a second point. The boundary of the foreground mask is determined, then the intersection point of the boundaries is obtained, and the distance between the first point and the second point can be used as the reference of the vehicle size.
Preferably, the method of acquiring the intersection of the boundary lines of the foreground mask further includes:
establishing vertical lines of all points on a boundary line of a vehicle foreground mask, and taking the leftmost line as a left boundary and the rightmost line as a right boundary in all the vertical lines;
a third point of intersection of the bottom boundary and the left boundary; the intersection point of the front boundary and the right boundary is a fourth point; the intersection point of the top boundary and the right boundary is a fifth point; and the intersection point of the rear boundary and the left boundary is a sixth point. The left and right boundaries of the foreground mask are determined, coordinates of intersection points of all the boundaries on the foreground mask can be obtained, and therefore the size of the vehicle is calculated according to the coordinates of the intersection points.
Preferably, the method for obtaining the coordinates of the intersection point according to the calibration data includes:
and acquiring the coordinates of the intersection points with the height being not zero, which are the same as the coordinates of the intersection point X, Y with the height being zero, according to the coordinates of the intersection points with the height being zero.
Preferably, the calibration data includes a rotation matrix R, a translation matrix T and an internal parameter matrix K;
R is determined by the pitch angle t, the rotation angle p and the rotation angle s; T is determined by the camera height h; and K is determined by the focal length f, the aspect ratio and the tilt factor; the explicit matrix expressions are given as images in the original publication. Here:
f is the focal length, i.e. the distance from the image plane to the center of the camera lens along the optical axis;
h is the camera height, i.e. the vertical height from the center of the camera lens to the plane of height 0;
t is the pitch angle, i.e. the vertical angle of the camera optical axis relative to the plane of height 0 in the first coordinate system;
p is the rotation angle, i.e. the horizontal angle, measured anticlockwise, from the coordinate axis parallel to the traveling direction in the first coordinate system to the projection of the camera optical axis onto the plane of height 0;
s is the rotation angle of the camera about its own optical axis.
The camera can be calibrated by a single-vanishing-point or double-vanishing-point calibration method, so that complete camera data can be acquired.
Preferably, the coordinates of the intersection point Q are Q = (X_Q, Y_Q, Z_Q), and the corresponding point in the two-dimensional image coordinates is q = (x_q, y_q); the two are related by the projection equation determined by R, T and K (given as an image in the original publication).
A video-based vehicle dimension measuring device, comprising:
an image acquisition module that acquires an image of a video scene; establishing a first coordinate system in the video scene, wherein the first coordinate system is a three-dimensional coordinate system;
the camera data acquisition module acquires calibration data of the camera and establishes a coordinate mapping relation between a first coordinate system and a point on an image;
the image preprocessing module acquires a foreground binary image and a foreground mask of the vehicle to be detected according to the image;
and the size acquisition module is used for acquiring the intersection point of the boundary line of the foreground mask, acquiring the coordinate of the intersection point according to the calibration data and acquiring the size of the vehicle according to the coordinate of the intersection point.
A terminal comprising a processor and a memory, the memory storing a computer program, the processor being adapted to carry out the method when executing the computer program.
A storage medium storing a computer program which, when executed by a processor, implements the method described above.
Compared with the prior art, the invention has the beneficial effects that: the size of the vehicle can be measured through the video image, the cost of vehicle size detection is greatly reduced, a plurality of vehicles in the image can be detected simultaneously, and the detection efficiency is greatly improved.
Drawings
Fig. 1 is a schematic view of a calibration rod loaded with a calibration vehicle.
Fig. 2 is a schematic diagram of a camera model and camera external parameters.
Fig. 3 is a schematic diagram of a vanishing point.
FIG. 4 is a schematic diagram of foreground mask boundaries.
FIG. 5 is a schematic diagram of a foreground mask.
FIG. 6 is a flow chart of a video-based vehicle sizing method.
FIG. 7 is another flow chart of a video-based vehicle sizing method.
Fig. 8 is a schematic view of a video-based vehicle sizing device.
Detailed Description
The following examples are further illustrative of the present invention and are not intended to be limiting thereof.
A video-based vehicle sizing method, in some embodiments of the present application, comprises:
s100, acquiring an image of a video scene; establishing a first coordinate system in the video scene, wherein the first coordinate system is a three-dimensional coordinate system;
s200, acquiring calibration data of the camera, and establishing a coordinate mapping relation between a first coordinate system and a point on an image;
s300, acquiring a foreground binary image and a foreground mask of the vehicle to be detected according to the image;
s400, acquiring an intersection point of the boundary line of the foreground mask, acquiring coordinates of the intersection point according to the calibration data, and acquiring the size of the vehicle according to the coordinates of the intersection point.
The image of the vehicle to be measured is binarized to obtain a foreground mask, the intersection point of the boundary line of the foreground mask can represent the size of the vehicle, and the coordinates of the intersection point in a video scene can be calculated from the image coordinates of the intersection point through the mapping relation between the first coordinate system and the image, so that the size of the vehicle can be calculated.
Typically the camera calibration process involves establishing a mapping of points on the image taken by the camera to the first coordinate system.
The size of the vehicle can be measured through the video image, the cost of vehicle size detection is greatly reduced, a plurality of vehicles in the image can be detected simultaneously, and the detection efficiency is greatly improved.
In some embodiments of the application, a background picture of a video scene is obtained by using a classical algorithm such as a Gaussian mixture model, and then a foreground binary image of a vehicle to be measured is obtained by a frame difference method.
In some embodiments of the application, a binary foreground map of a vehicle is obtained by using a Mask R-CNN and other deep learning algorithms.
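As a concrete illustration of the background-subtraction route described above, the sketch below uses OpenCV's Gaussian-mixture background subtractor to obtain a per-frame foreground binary image and mask; the video path, threshold and kernel size are placeholder values, and this is only one possible realisation rather than the patent's reference implementation.

```python
import cv2

# Gaussian mixture background model (one possible classical algorithm)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("traffic_scene.mp4")  # placeholder path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = subtractor.apply(frame)          # grey mask: 0 background, 127 shadow, 255 foreground
    # Threshold away shadows to obtain the foreground binary image
    _, binary = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
    # Morphological closing so each vehicle forms a solid foreground mask
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # `mask` is the per-frame foreground mask used by the later boundary steps
cap.release()
```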
In some embodiments of the present application, in the first coordinate system: the X coordinate axis is along the road driving direction on the ground, the Y coordinate axis is orthogonal to the road driving direction on the ground, and the Z coordinate axis is perpendicular to the ground direction; the origin of the first coordinate system is the central point of the image, and the optical axis of the camera passes through the origin of the first coordinate system.
The first coordinate system is established in such a way, so that the camera is conveniently calibrated, and the point on the image shot by the camera is conveniently mapped into the coordinate in the first coordinate system according to the camera data obtained after the camera is calibrated.
In some embodiments of the present application, s201. establishing a second coordinate system in the image, where the second coordinate system is a two-dimensional coordinate system, a row direction of the image is an x coordinate axis, a column direction of the image is a y coordinate axis, and a center of the image is an origin of the second coordinate system.
With the image known, any point on the image can then be described by its coordinates in the second coordinate system.
It should be understood that the establishment of the second coordinate system is only for convenience of explaining the technical solution of the present application, and in practical applications, the mapping relationship between the points on the image and the first coordinate system may be established only by establishing the first coordinate system.
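A small sketch of the coordinate convention just described: pixel indices are shifted so that the image centre becomes the origin of the second coordinate system. The image size is only an example value.

```python
import numpy as np

def pixel_to_centered(u, v, image_width, image_height):
    """Convert a pixel position (u = column index, v = row index) to
    second-coordinate-system coordinates (x, y) with origin at the image centre."""
    x = u - image_width / 2.0
    y = v - image_height / 2.0
    return x, y

# Example: the centre pixel of a 1920x1080 frame maps to (0, 0)
print(pixel_to_centered(960, 540, 1920, 1080))  # -> (0.0, 0.0)
```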
In some embodiments of the present application, the calibration of the camera may use a road marking as a calibration template.
When a road marking is used as the calibration template, two parallel road markings are taken, and the two endpoints of each marking are taken, forming four points. The coordinates of the four points in the image are obtained, and the length value L and the width value W of the calibration template are measured, from which the calibration data of the camera are then obtained.
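To illustrate how a four-point template with known length L and width W can yield calibration data, the sketch below applies OpenCV's generic solvePnP to the four corner correspondences. The pixel coordinates, L, W and the intrinsic matrix guess are invented example values, and this generic PnP approach is only a stand-in for the calibration procedure the patent actually relies on.

```python
import numpy as np
import cv2

# World coordinates of the template corners on the ground plane (Z = 0),
# with L along the driving direction and W across it (example values, metres).
L, W = 6.0, 3.75
object_points = np.array([[0, 0, 0], [L, 0, 0], [0, W, 0], [L, W, 0]], dtype=np.float64)

# Corresponding image coordinates of the same four points (made-up pixels).
image_points = np.array([[812, 640], [905, 452], [1180, 655], [1233, 460]], dtype=np.float64)

# Rough intrinsic matrix: focal-length guess, principal point at the image centre.
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix
T = tvec.ravel()             # translation vector
print("R =\n", R, "\nT =", T)
```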
In some embodiments of the present application, the calibration of the camera may use a calibration rod carried by a dedicated calibration vehicle as a calibration template.
Specific colors or textures are sprayed at salient positions of the vehicle so that a video recognition algorithm can automatically locate the feature points, and at least two feature points are arranged on a horizontal plane in the direction orthogonal to the driving direction. The feature points can be arranged on the vehicle body or at the two ends of a rod additionally mounted on the vehicle head. Two feature points are measured in advance: their height above the ground is h`0 and the distance between them is W`.
During calibration, the calibrated vehicle is driven at a constant speed along the traveling direction in the road, two or more video frames are captured as it passes through the field of view of the camera to be calibrated, and the image coordinate positions of the vehicle feature points are automatically identified in those frames, as shown in fig. 1, denoted A`(x_A`, y_A`), B`(x_B`, y_B`), C`(x_C`, y_C`), D`(x_D`, y_D`). Let the vehicle speed be v` and the time interval between two frames be t`; the distance between the points A` and B` can then be taken as the driving distance of the vehicle, i.e. L` = v`*t`. With L` and W` known, the camera parameters can be acquired and the calibration completed. The specific calibration process may refer to Chinese patent application 202010397752.1.
In some embodiments of the present application, the calibration data includes a rotation matrix R, a translation matrix T and an internal parameter matrix K;
R is determined by the pitch angle t, the rotation angle p and the rotation angle s; T is determined by the camera height h; and K is determined by the focal length f, the aspect ratio and the tilt factor; the explicit matrix expressions are given as images in the original publication. Here:
f is the focal length, i.e. the distance from the image plane to the center of the camera lens along the optical axis;
h is the camera height, i.e. the vertical height from the center of the camera lens to the plane of height 0;
t is the pitch angle, i.e. the vertical angle of the camera optical axis relative to the plane of height 0 in the first coordinate system;
p is the rotation angle, i.e. the horizontal angle, measured anticlockwise, from the coordinate axis parallel to the traveling direction in the first coordinate system to the projection of the camera optical axis onto the plane of height 0;
s is the rotation angle of the camera about its own optical axis.
It should be understood that any calibration method that can obtain the camera data described above is within the scope of the present application.
The camera may be calibrated by a single-vanishing-point or double-vanishing-point calibration method to obtain complete camera data.
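Because the exact matrix expressions are reproduced only as images in the original publication, the sketch below builds R, T and K from the listed parameters using one common roadside-camera parameterisation (pan about the vertical world axis, then tilt, then roll about the optical axis, camera centre at height h). It should be read as an assumption for illustration, not as the patent's own formulas.

```python
import numpy as np

def calibration_matrices(f, h, t, p, s, aspect=1.0, skew=0.0):
    """Assumed parameterisation: world Z points up, camera centre at height h above Z = 0.
    f: focal length (pixels); t: pitch/tilt, p: pan, s: roll, all in radians."""
    Rp = np.array([[np.cos(p), -np.sin(p), 0],
                   [np.sin(p),  np.cos(p), 0],
                   [0,          0,         1]])        # pan about the world Z axis
    Rt = np.array([[1, 0,          0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t),  np.cos(t)]])        # tilt about the camera X axis
    Rs = np.array([[np.cos(s), -np.sin(s), 0],
                   [np.sin(s),  np.cos(s), 0],
                   [0,          0,         1]])        # roll about the optical axis
    R = Rs @ Rt @ Rp
    C = np.array([0.0, 0.0, h])                        # camera centre in world coordinates
    T = -R @ C                                         # translation so that q ~ K(R Q + T)
    K = np.array([[f,      skew * f, 0.0],
                  [0.0, aspect * f,  0.0],
                  [0.0,        0.0,  1.0]])            # principal point at the image centre
    return R, T, K
```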
In some embodiments of the present application, the coordinates of the intersection point Q are Q = (X_Q, Y_Q, Z_Q), and the corresponding point in the two-dimensional image coordinates is q = (x_q, y_q). Under the calibrated camera model the two are related by the projection

λ · [x_q, y_q, 1]^T = K · (R · [X_Q, Y_Q, Z_Q]^T + T).    (1)

From formula (1) it can be deduced that, when Z_Q is known, X_Q can be expressed in terms of (x_q, y_q) and the calibration data, written here as formula (2), and likewise Y_Q, written here as formula (3).

If the intersection point Q lies in the plane of height 0, then Z_Q equals zero, and X_Q and Y_Q can be calculated from (x_q, y_q) by substituting Z_Q = 0 into formulas (2) and (3), written here as formulas (4) and (5). The explicit expressions for formulas (2)-(5) are given as images in the original publication.
it should be understood that Q is for convenience only, and Q refers to any point in an image.
In some embodiments of the present application, the method of obtaining an intersection of boundary lines of the foreground mask includes:
s301, acquiring a calibration template adopted in the camera calibration process, and acquiring at least two vanishing points based on the calibration template;
s302, establishing connecting lines of all points on a boundary line of a vehicle foreground mask and a vanishing point, wherein in all the connecting lines, the connecting line closest to the bottom of the vehicle is taken as a bottom boundary, and the connecting line closest to the top of the vehicle is taken as a top boundary;
s303, establishing connecting lines of all points and another vanishing point on the boundary line of the vehicle foreground mask, wherein in all the connecting lines, the connecting line closest to the rear part of the vehicle is taken as a rear boundary, and the connecting line closest to the front part of the vehicle is taken as a front boundary;
the intersection of the bottom boundary and the front boundary is a first point, and the intersection of the top boundary and the rear boundary is a second point.
The boundary of the foreground mask is determined, then the intersection point of the boundaries is obtained, and the distance between the first point and the second point can be used as the reference of the vehicle size.
In some embodiments of the present application, the method of obtaining an intersection of boundary lines of the foreground mask includes:
sequentially marking the four points of the template as A`, B`, C`, D`; finding the intersection of lines A`B` and C`D` as one vanishing point, denoted vp1; and finding the intersection of lines A`C` and B`D` as another vanishing point, denoted vp2, as shown in FIG. 3;
establishing connecting lines between all points on the boundary line of the vehicle foreground mask and the vanishing point vp1, where among all these connecting lines the one closest to the bottom of the vehicle is taken as the bottom boundary (the straight line on which line segment AB lies in FIG. 3) and the one closest to the top of the vehicle is taken as the top boundary (the straight line on which line segment ED lies in FIG. 3);
establishing connecting lines between all points on the boundary line of the vehicle foreground mask and the other vanishing point vp2, where among all these connecting lines the one closest to the rear of the vehicle is taken as the rear boundary (the straight line on which line segment EF lies in FIG. 3) and the one closest to the front of the vehicle is taken as the front boundary (the straight line on which line segment BC lies in FIG. 3);
the intersection of the bottom boundary and the front boundary is a first point B, and the intersection of the top boundary and the rear boundary is a second point E.
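A sketch of this construction in homogeneous image coordinates: each vanishing point is obtained as the cross product of two template lines, and each boundary passes through the contour point that is angularly extreme as seen from the vanishing point (an implementation choice for "the connecting line closest to" a side). The template and contour coordinates are made-up example pixels.

```python
import numpy as np

def vanishing_point(p1, p2, p3, p4):
    """Intersection of line p1-p2 with line p3-p4, via homogeneous coordinates."""
    h = lambda p: np.array([p[0], p[1], 1.0])
    line_a = np.cross(h(p1), h(p2))
    line_b = np.cross(h(p3), h(p4))
    vp = np.cross(line_a, line_b)
    return vp[:2] / vp[2]

def extreme_contact_points(vp, contour):
    """Among the lines joining vp to each contour point, return the two contour
    points giving the extreme (tangent-like) lines: all other contour points
    lie between these two connecting lines."""
    contour = np.asarray(contour, dtype=float)
    angles = np.arctan2(contour[:, 1] - vp[1], contour[:, 0] - vp[0])
    return contour[np.argmin(angles)], contour[np.argmax(angles)]

# Example template corners A`, B`, C`, D` (made-up pixel coordinates)
A_, B_, C_, D_ = (812, 640), (905, 452), (1180, 655), (1233, 460)
vp1 = vanishing_point(A_, B_, C_, D_)   # intersection of A`B` and C`D`
vp2 = vanishing_point(A_, C_, B_, D_)   # intersection of A`C` and B`D`

# With a mask contour (N x 2 array of pixel points): the bottom/top boundaries pass
# through the extreme points w.r.t. vp1 (which is which depends on vp1's position),
# the rear/front boundaries w.r.t. vp2, and the left/right boundaries are simply
# the vertical lines at the contour's minimum and maximum x.
contour = np.array([[900, 600], [950, 520], [1100, 510], [1150, 590], [1000, 630]])
pt1a, pt1b = extreme_contact_points(vp1, contour)
pt2a, pt2b = extreme_contact_points(vp2, contour)
left_x, right_x = contour[:, 0].min(), contour[:, 0].max()
```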
In some embodiments of the present application, s304, vertical lines of all points on the boundary line of the vehicle foreground mask are established, and of all the vertical lines, a leftmost line is referred to as a left boundary (a line on which a line segment AF in fig. 3 is located), and a rightmost line is referred to as a right boundary (a line on which a line segment CD in fig. 3 is located);
the intersection point of the bottom boundary and the left boundary is a third point A; the intersection point of the front boundary and the right boundary is a fourth point C; the intersection point of the top boundary and the right boundary is a fifth point D; and the intersection point of the rear boundary and the left boundary is a sixth point F.
The vertical line is parallel to the column direction (y-axis direction) in the second coordinate system.
It should be understood that the second coordinate system may be established with the image center point as the origin, and the technical solution of establishing the second coordinate system with other points as the origin to determine the intersection point of the boundary line should also be within the scope of the present application.
In some embodiments of the present application, in the second coordinate system with the central point of the image as the origin of coordinates (0, 0), the point A and the point B are points on the ground, i.e. points of zero height, and the X and Y coordinate values of point A and point B in the first coordinate system can be calculated by formula (4) and formula (5).
The length value of the AB line segment, i.e., the length value of the vehicle, is thus calculated.
L_V = √((X_A - X_B)² + (Y_A - Y_B)²)    (6)

where L_V is the vehicle length value, and X_A, Y_A and X_B, Y_B are the coordinates of points A and B in the first coordinate system, obtained from the image coordinates (x_A, y_A) and (x_B, y_B) of the two points via formulas (4) and (5). When the camera parameters are known, the vehicle length value can be calculated by formula (6).
In some embodiments of the present application, in the second coordinate system with the top left corner of the foreground binary map as the origin of coordinates (0, 0), the points B and C are points on the ground, i.e. points with zero height.
The X and Y coordinate values of the B and C points in the first coordinate system can be calculated by the formula (4) and the formula (5). The length value of the BC line segment, i.e. the width value of the vehicle, is thus calculated.
W_V = √((X_B - X_C)² + (Y_B - Y_C)²)    (7)

where W_V is the vehicle width value, and X_B, Y_B and X_C, Y_C are the coordinates of points B and C in the first coordinate system, obtained from the image coordinates (x_B, y_B) and (x_C, y_C) of the two points via formulas (4) and (5). When the camera parameters are known, the vehicle width value can be calculated by formula (7).
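A small numeric sketch of formulas (6) and (7): once the boundary intersection points A, B and C have been back-projected to the ground plane (for instance with the back_project_to_ground helper sketched earlier, a hypothetical name), the length and width are plain Euclidean distances. The coordinates below are invented example values in metres.

```python
import numpy as np

# Assumed first-coordinate-system coordinates of the ground points, as returned
# by back-projecting their image coordinates via formulas (4) and (5).
XA, YA = 35.2, 3.1   # point A: intersection of bottom and left boundaries (example)
XB, YB = 42.7, 2.4   # point B: intersection of bottom and front boundaries (example)
XC, YC = 42.1, 4.9   # point C: intersection of front and right boundaries (example)

L_V = np.hypot(XA - XB, YA - YB)   # formula (6): vehicle length value
W_V = np.hypot(XB - XC, YB - YC)   # formula (7): vehicle width value
print(f"length = {L_V:.2f} m, width = {W_V:.2f} m")
```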
In some embodiments of the present application, s305. the method for obtaining coordinates of the intersection point according to the calibration data includes:
and acquiring the coordinates of the intersection points with the height being not zero, which are the same as the coordinates of the intersection point X, Y with the height being zero, according to the coordinates of the intersection points with the height being zero. In some embodiments of the present application, in the second coordinate system with the upper left corner of the foreground binary image as the origin of coordinates (0, 0), the point C is a point on the ground, i.e. a point with zero height, and the X and Y coordinate values of this point in the first coordinate system can be calculated by formula (4) and formula (5). In the first coordinate system, the point D is a point with the same XY coordinates as the point C and unknown height, and in the case where XY is known, the Z coordinate value of this point, i.e., the length of the line segment CD, and also the height value of the vehicle can be calculated by the equations (2) and (3).
X_C and Y_C are obtained from the image coordinates (x_C, y_C) of point C by formulas (4) and (5), written here as formulas (8) and (9). Since the X and Y coordinates of point D are the same as those of point C, substituting X_D = X_C, Y_D = Y_C and the image coordinates (x_D, y_D) of point D into formulas (2) and (3) and solving gives the height Z_D of point D, written here as formula (10); Z_D is the height value of the vehicle. The explicit expressions for formulas (8)-(10) are given as images in the original publication. Thus, when the camera parameters are known, the vehicle height can be calculated from the image coordinates of the two points C and D.
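To illustrate the height step, the sketch below solves the projection relation λ·q = K(R·Q + T) for the unknown Z_D of point D, given that X_D and Y_D equal the already-computed X_C and Y_C. This is one direct way to carry out what formulas (2), (3) and (10) describe, under the same assumed projection model as the earlier sketches.

```python
import numpy as np

def solve_height(xd, yd, XC, YC, K, R, T):
    """Solve for Z_D so that the world point (XC, YC, Z_D) projects to (xd, yd)
    under lambda * [xd, yd, 1]^T = K (R Q + T)."""
    M = K @ R                                   # 3x3 combined matrix
    b = K @ np.asarray(T).reshape(3)
    q = np.array([xd, yd])
    # For each image row i in {x, y}:  M[i].Q + b[i] = q_i * (M[2].Q + b[2]).
    # Collect the coefficient of Z_D and the constant terms for both rows.
    a = M[:2, 2] - q * M[2, 2]
    c = q * (M[2, 0] * XC + M[2, 1] * YC + b[2]) \
        - (M[:2, 0] * XC + M[:2, 1] * YC + b[:2])
    # Two linear equations in one unknown: least-squares solution
    return float(np.dot(a, c) / np.dot(a, a))
```

With the K, R, T from the calibration sketch and the image coordinates of point D, solve_height returns the vehicle height value Z_D that formula (10) provides in closed form.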
It should be understood that the vehicle size may refer to one or more of the length, width and height, and that measuring the length of the line connecting any two points by the method of the present application also falls within the scope of the present application.
A video-based vehicle dimension measuring device, in some embodiments of the present application, comprises:
an image acquisition module 100, wherein the image acquisition module 100 acquires an image of a video scene; establishing a first coordinate system in the video scene, wherein the first coordinate system is a three-dimensional coordinate system;
the camera data acquisition module 200 is used for calibrating a camera, acquiring calibration data of the camera and establishing a coordinate mapping relation between a first coordinate system and a point on an image;
the image preprocessing module 300, which acquires a foreground binary image and a foreground mask of the vehicle to be detected according to the background image;
and the size acquisition module 400 acquires an intersection point of the boundary lines of the foreground mask, acquires coordinates of the intersection point according to the calibration data, and acquires the size of the vehicle according to the coordinates of the intersection point.
In some embodiments of the present application, in the first coordinate system: the X coordinate is along the road driving direction on the ground, the Y coordinate is orthogonal to the road driving direction on the ground, and the Z coordinate is vertical to the ground direction; the origin of the first coordinate system is the central point of the image, and the optical axis of the camera passes through the origin of the first coordinate system.
In some embodiments of the present application, the method further includes a first intersection obtaining module, and the method for obtaining an intersection of the boundary lines of the foreground mask by the first intersection obtaining module includes:
acquiring a calibration template adopted in the process of calibrating the camera, and acquiring at least two vanishing points based on the calibration template;
establishing connecting lines of all points on the boundary line of the vehicle foreground mask and a vanishing point, wherein in all the connecting lines, the connecting line closest to the bottom of the vehicle is taken as a bottom boundary, and the connecting line closest to the top of the vehicle is taken as a top boundary;
establishing connecting lines of all points on a boundary line of a vehicle foreground mask and another vanishing point, wherein in all the connecting lines, the connecting line closest to the rear part of the vehicle is taken as a rear boundary, and the connecting line closest to the front part of the vehicle is taken as a front boundary;
the intersection of the bottom boundary and the front boundary is a first point, and the intersection of the top boundary and the rear boundary is a second point.
In some embodiments of the present application, the method further includes a second intersection obtaining module, and the method for obtaining an intersection of the boundary lines of the foreground mask by the second intersection obtaining module includes:
establishing vertical lines through all points on the boundary line of the vehicle foreground mask, and taking, among all the vertical lines, the leftmost line as the left boundary and the rightmost line as the right boundary;
a third point of intersection of the bottom boundary and the left boundary; the intersection point of the front boundary and the right boundary is a fourth point; the intersection point of the top boundary and the right boundary is a fifth point; and the intersection point of the rear boundary and the left boundary is a sixth point.
In some embodiments of the present application, the method for acquiring the coordinates of the intersection point by the size acquisition module according to the calibration data includes:
and acquiring the coordinates of the intersection points with the height being not zero, which are the same as the coordinates of the intersection point X, Y with the height being zero, according to the coordinates of the intersection points with the height being zero.
In some embodiments of the present application, the camera data obtaining module obtains calibration data of a camera, where the calibration data includes a rotation matrix R, a translation matrix T, and an internal parameter matrix K;
R is determined by the pitch angle t, the rotation angle p and the rotation angle s; T is determined by the camera height h; and K is determined by the focal length f, the aspect ratio and the tilt factor; the explicit matrix expressions are given as images in the original publication. Here:
f is the focal length, i.e. the distance from the image plane to the center of the camera lens along the optical axis;
h is the camera height, i.e. the vertical height from the center of the camera lens to the plane of height 0;
t is the pitch angle, i.e. the vertical angle of the camera optical axis relative to the plane of height 0 in the first coordinate system;
p is the rotation angle, i.e. the horizontal angle, measured anticlockwise, from the coordinate axis parallel to the traveling direction in the first coordinate system to the projection of the camera optical axis onto the plane of height 0;
s is the rotation angle of the camera about its own optical axis.
The camera may be calibrated by a single-vanishing-point or double-vanishing-point calibration method to obtain complete camera data.
In some embodiments of the present application, the size acquisition module comprises a coordinate acquisition module that acquires the coordinates of an intersection point Q, the coordinates being Q = (X_Q, Y_Q, Z_Q), with the corresponding point in the two-dimensional image coordinates being q = (x_q, y_q); the two are related by the projection formula (1).
It should be understood that the symbol Q is used only for convenience; on the basis of the above embodiments, Q may refer to any of the points A, B, C, D, E and F.
A terminal, in some embodiments of the present application, comprises a processor and a memory, the memory storing a computer program, the processor being configured to perform the method when executing the computer program.
A storage medium, in some embodiments of the present application, stores a computer program which, when executed by a processor, implements the method described above.
The above is a detailed description of possible embodiments of the present invention. These embodiments are not intended to limit the scope of the invention; all equivalent implementations or modifications that do not depart from the scope of the invention shall fall within the scope of the claims.

Claims (10)

1. A video-based vehicle dimension measurement method is characterized by comprising the following steps:
acquiring an image of a video scene; establishing a first coordinate system in the video scene, wherein the first coordinate system is a three-dimensional coordinate system;
acquiring calibration data of a camera, and establishing a coordinate mapping relation between a first coordinate system and a point on an image;
acquiring a foreground binary image and a foreground mask of the vehicle to be detected according to the image;
and acquiring the intersection point of the boundary line of the foreground mask, acquiring the coordinate of the intersection point according to the calibration data, and acquiring the size of the vehicle according to the coordinate of the intersection point.
2. The video-based vehicle sizing method according to claim 1, characterized in that in said first coordinate system: the X coordinate is along the road driving direction on the ground, the Y coordinate is orthogonal to the road driving direction on the ground, and the Z coordinate is vertical to the ground direction; the origin of the first coordinate system is the central point of the image, and the optical axis of the camera passes through the origin of the first coordinate system.
3. The video-based vehicle dimension measuring method according to claim 1, wherein the method of acquiring the intersection of the boundary lines of the foreground mask comprises:
acquiring a calibration template adopted in the process of calibrating the camera, and acquiring at least two vanishing points based on the calibration template;
establishing connecting lines of all points on the boundary line of the vehicle foreground mask and a vanishing point, wherein in all the connecting lines, the connecting line closest to the bottom of the vehicle is taken as a bottom boundary, and the connecting line closest to the top of the vehicle is taken as a top boundary;
establishing connecting lines of all points on a boundary line of a vehicle foreground mask and another vanishing point, wherein in all the connecting lines, the connecting line closest to the rear part of the vehicle is taken as a rear boundary, and the connecting line closest to the front part of the vehicle is taken as a front boundary;
the intersection of the bottom boundary and the front boundary is a first point, and the intersection of the top boundary and the rear boundary is a second point.
4. The video-based vehicle dimension measuring method of claim 1, wherein the method of obtaining intersections of the boundary lines of the foreground mask further comprises:
establishing vertical lines of all points on a boundary line of a vehicle foreground mask, and taking the leftmost line as a left boundary and the rightmost line as a right boundary in all the vertical lines;
a third point of intersection of the bottom boundary and the left boundary; the intersection point of the front boundary and the right boundary is a fourth point; the intersection point of the top boundary and the right boundary is a fifth point; and the intersection point of the rear boundary and the left boundary is a sixth point.
5. The video-based vehicle sizing method according to claim 1, wherein the method of obtaining coordinates of the intersection points from the calibration data comprises:
according to the coordinates of an intersection point whose height is zero, acquiring the coordinates of an intersection point whose height is not zero and whose X and Y coordinates are the same as those of the zero-height intersection point.
6. The video-based vehicle sizing method according to claim 1, wherein said calibration data comprises a rotation matrix R, a translation matrix T and an internal parameter matrix K;
R is determined by the pitch angle t, the rotation angle p and the rotation angle s; T is determined by the camera height h; and K is determined by the focal length f, the aspect ratio and the tilt factor; the explicit matrix expressions are given as images in the original publication; wherein:
f is the focal length, i.e. the distance from the image plane to the center of the camera lens along the optical axis;
h is the camera height, i.e. the vertical height from the center of the camera lens to the plane of height 0;
t is the pitch angle, i.e. the vertical angle of the camera optical axis relative to the plane of height 0 in the first coordinate system;
p is the rotation angle, i.e. the horizontal angle, measured anticlockwise, from the coordinate axis parallel to the traveling direction in the first coordinate system to the projection of the camera optical axis onto the plane of height 0;
s is the rotation angle of the camera about its own optical axis.
7. The video-based vehicle dimension measuring method according to claim 6, wherein the coordinates of the intersection point Q are Q = (X_Q, Y_Q, Z_Q), the corresponding point in the two-dimensional image coordinates is q = (x_q, y_q), and the two are related by the projection equation determined by R, T and K (given as an image in the original publication).
8. A video-based vehicle dimension measuring device, characterized by comprising:
an image acquisition module that acquires an image of a video scene; establishing a first coordinate system in the video scene, wherein the first coordinate system is a three-dimensional coordinate system;
the camera data acquisition module acquires calibration data of the camera and establishes a coordinate mapping relation between a first coordinate system and a point on an image;
the image preprocessing module acquires a foreground binary image and a foreground mask of the vehicle to be detected according to the image;
and the size acquisition module is used for acquiring the intersection point of the boundary line of the foreground mask, acquiring the coordinate of the intersection point according to the calibration data and acquiring the size of the vehicle according to the coordinate of the intersection point.
9. A terminal, comprising a processor and a memory, the memory storing a computer program, the processor being configured to implement the method of any one of claims 1 to 7 when executing the computer program.
10. A storage medium, in which a computer program is stored which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202011423390.5A 2020-12-08 2020-12-08 Video-based vehicle size measuring method and device, terminal and storage medium Pending CN112489106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011423390.5A CN112489106A (en) 2020-12-08 2020-12-08 Video-based vehicle size measuring method and device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011423390.5A CN112489106A (en) 2020-12-08 2020-12-08 Video-based vehicle size measuring method and device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN112489106A true CN112489106A (en) 2021-03-12

Family

ID=74940749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011423390.5A Pending CN112489106A (en) 2020-12-08 2020-12-08 Video-based vehicle size measuring method and device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN112489106A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011388A (en) * 2021-04-23 2021-06-22 吉林大学 Vehicle outer contour size detection method based on license plate and lane line
CN113011388B (en) * 2021-04-23 2022-05-06 吉林大学 Vehicle outer contour size detection method based on license plate and lane line
CN114863083A (en) * 2022-04-06 2022-08-05 包头钢铁(集团)有限责任公司 Method and system for positioning vehicle and measuring size
CN114926523A (en) * 2022-05-06 2022-08-19 杭州海康威视系统技术有限公司 Building height measuring method and equipment
CN117115760A (en) * 2023-07-21 2023-11-24 中铁十局集团第三建设有限公司 Engineering vehicle height limit detection method and system based on video picture
CN117274956A (en) * 2023-11-17 2023-12-22 深圳市航盛电子股份有限公司 Vehicle side view generation method, device, terminal equipment and storage medium
CN117274956B (en) * 2023-11-17 2024-05-24 深圳市航盛电子股份有限公司 Vehicle side view generation method, device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110148169B (en) Vehicle target three-dimensional information acquisition method based on PTZ (pan/tilt/zoom) pan-tilt camera
CN112489106A (en) Video-based vehicle size measuring method and device, terminal and storage medium
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110322702B (en) Intelligent vehicle speed measuring method based on binocular stereo vision system
CN109074668B (en) Path navigation method, related device and computer readable storage medium
CN108885791B (en) Ground detection method, related device and computer readable storage medium
CN112037159B (en) Cross-camera road space fusion and vehicle target detection tracking method and system
CN111563921B (en) Underwater point cloud acquisition method based on binocular camera
CN110842940A (en) Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN106156723B (en) A kind of crossing fine positioning method of view-based access control model
CN107389026A (en) A kind of monocular vision distance-finding method based on fixing point projective transformation
CN108230393A (en) A kind of distance measuring method of intelligent vehicle forward vehicle
US9336595B2 (en) Calibration device, method for implementing calibration, and camera for movable body and storage medium with calibration function
CN103729837A (en) Rapid calibration method of single road condition video camera
CN112902874B (en) Image acquisition device and method, image processing method and device and image processing system
CN112184792B (en) Road gradient calculation method and device based on vision
CN109410264A (en) A kind of front vehicles distance measurement method based on laser point cloud and image co-registration
CN111932627B (en) Marker drawing method and system
JP5310027B2 (en) Lane recognition device and lane recognition method
CN111046843A (en) Monocular distance measurement method under intelligent driving environment
CN112017249A (en) Vehicle-mounted camera roll angle obtaining and mounting angle correcting method and device
CN111476798B (en) Vehicle space morphology recognition method and system based on contour constraint
CN111382591A (en) Binocular camera ranging correction method and vehicle-mounted equipment
CN110197104B (en) Distance measurement method and device based on vehicle
CN112446915A (en) Picture-establishing method and device based on image group

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination