CN110717445B - Front vehicle distance tracking system and method for automatic driving - Google Patents
Front vehicle distance tracking system and method for automatic driving
- Publication number
- CN110717445B CN201910953010.XA
- Authority
- CN
- China
- Prior art keywords
- vehicle
- camera
- contour
- image
- coordinate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S11/00—Systems for determining distance or velocity not using reflection or reradiation
- G01S11/12—Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
Abstract
The invention discloses a front vehicle distance tracking system and method for automatic driving. The system comprises: a data acquisition unit for acquiring an image sequence at equal time intervals from the camera; a vehicle detection unit for extracting the contour frame line of the front vehicle from each image; a coordinate positioning unit for calculating the real position of the front vehicle from the pixel coordinates of the vehicle contour frame line in the image and the camera parameters; and a vehicle tracking unit which identifies the same vehicle across multiple images by linking the front vehicle contour frame lines of all images in the sequence and assigns each vehicle a number. Based on the vehicle positions calculated by the coordinate positioning unit and the vehicle numbers assigned by the vehicle tracking unit, the system converts the input image sequence into an XML-format file of the distance sequences of the front vehicles. By this technical scheme, the position of the front vehicle on the road can be detected and tracked using a single vehicle-mounted camera, which benefits road sensing, automatic obstacle avoidance and decision assistance in an automatic driving system.
Description
Technical Field
The invention relates to the technical field of vision-assisted automatic driving, in particular to a front vehicle distance tracking system for automatic driving and a vehicle distance measuring method based on a two-dimensional image.
Background
Automatic driving based on visual assistance is the mainstream solution in current automatic driving technology. Cameras and lidar are the two most commonly used vehicle-mounted vision sensors: cameras collect two-dimensional image data, while lidar collects three-dimensional point cloud data. Compared with a camera, lidar is currently expensive to manufacture, so the camera is the lowest-cost option for realizing vision-assisted automatic driving.
On an automatic driving platform based on a vehicle-mounted camera, image-oriented road condition recognition and understanding is the core problem of automatic driving. Automatic decision-making technologies related to automatic driving, such as vehicle avoidance and path planning, all need the position information of the vehicle ahead. Meanwhile, by tracking the vehicles on the road ahead, the system can further serve scene understanding and driving decision tasks such as anticipating the driving behavior of the vehicle ahead and braking early. Therefore, an image-based front vehicle distance tracking method and system is a key technology for automatic driving.
In the prior art, traditional graphics operators are usually adopted to extract front vehicle information from the image. Such methods cannot adapt well to complex road conditions and easily misjudge at night, under mottled tree shadows, on textured road surfaces, and when vehicles occlude one another. Meanwhile, most prior art does not consider the relevance of multi-frame images: different images of the same vehicle are not matched across consecutive frames, and continuous tracking of the front vehicle position is neglected, which limits the range of application of the ranging results. Specifically, the traditional front vehicle visual ranging methods face the following difficulties when assisting automatic driving decisions:
1) Complexity: the application scenes of automatic driving are highly complex; the illumination, road texture and occlusion of images shot under different road conditions vary, and traditional methods lack robustness in strongly interfered, complex scenes;
2) Representation capability: existing front vehicle ranging methods do not integrate vehicle tracking technology, cannot string multiple instantaneous front vehicle coordinates into a front vehicle trajectory, and therefore cannot estimate the driving behavior of the front vehicle from its trajectory.
Disclosure of Invention
The invention aims to: track the trajectory of the front vehicle and calculate its position based on the image sequence shot by a vehicle-mounted camera, thereby assisting automatic driving decisions.
The technical scheme of the invention provides a front vehicle distance tracking system for automatic driving, which is characterized by comprising the following components: the system comprises a data acquisition unit, a vehicle detection unit, a coordinate positioning unit and a vehicle tracking unit;
the data acquisition unit is used for extracting an image sequence with equal time intervals from the camera and simultaneously recording parameters of the camera, wherein the parameters comprise internal parameters representing the focal length, the projection center, the inclination and the distortion of the camera and external parameters representing the position translation and the position rotation of the camera;
the vehicle detection unit comprises a plurality of vehicle detection modules, wherein each vehicle detection module is responsible for processing a single image; the vehicle detection module identifies vehicles in the images by using a classifier constructed by a deep neural network model, and fits a contour outline of the vehicle by using a regressor, so that the front vehicle is positioned on a single image;
the coordinate positioning unit comprises a plurality of coordinate positioning modules, wherein each coordinate positioning module is responsible for processing a single vehicle outline frame line; for each vehicle contour outline, the coordinate positioning module transforms the coordinates of the contour outline pixels of the front vehicle into the coordinates of the vehicle body where the vehicle-mounted camera is located by a geometric transformation method according to the pixels of the four vertexes of the outline and the camera parameters obtained by the data acquisition unit, so as to determine the spatial position of the front vehicle relative to the vehicle;
the vehicle tracking unit is used for processing and identifying the same vehicle in the multi-frame images to form a driving track of each vehicle; the vehicle tracking unit identifies the same vehicle appearing in different pictures, gives the same unique id number to the vehicle, and connects the driving tracks in series, thereby realizing the front vehicle tracking.
Further, the vehicle detection unit comprises a candidate region generation module, a vehicle discrimination module and a contour outline regression module;
the candidate region generation module stores a plurality of anchor points with different sizes, and each anchor point is a rectangular region composed of a plurality of pixels;
the vehicle discrimination module receives as input the candidate regions, generated by the candidate region generation module, that may contain a vehicle, and outputs whether each candidate region contains a vehicle;
and the outline regression module finely adjusts the outline of the vehicle in the picture on the basis of the candidate region coordinates according to the mask region convolution neural network regressor.
Further, the vehicle detection unit calculates the contour frame line of the vehicle in the image by using a mask region convolutional neural network; the calculation of the mask region convolutional neural network is divided into three steps:
Step 1, candidate region extraction: several preset anchors of different sizes traverse the whole image from left to right and from top to bottom, and the positions of rectangular blocks that may serve as vehicle contour frame lines are calculated;
Step 2, region object classification: for each candidate region, a convolutional neural network extracts the visual features of the region, and a multilayer perceptron judges the category of the object in the region from these visual features;
Step 3, contour frame line coordinate refinement: for each candidate region, a neural network regresses the offset of the candidate region's contour frame line relative to the contour frame line of the detection target, so as to fit the contour frame line of the detection target more closely.
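As an illustration of these three steps, here is a minimal detection sketch in which a pretrained Mask R-CNN from torchvision stands in for the trained detector of the invention; the COCO class index and the score threshold are illustrative assumptions, not values fixed by the patent.

```python
# Minimal vehicle-detection sketch using a pretrained Mask R-CNN
# (torchvision); returns contour frame lines as (x, y, w, h) tuples.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_vehicles(image_path, score_threshold=0.7, car_class=3):
    """car_class=3 is "car" in the COCO label map used by this model."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]  # dict with boxes, labels, scores, masks
    frame_lines = []
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label.item() == car_class and score.item() >= score_threshold:
            x1, y1, x2, y2 = box.tolist()
            frame_lines.append((x1, y1, x2 - x1, y2 - y1))  # (x, y, w, h)
    return frame_lines
```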
Further, the coordinate locating unit is configured to:
according to the coordinates of the vehicle in the image, which are obtained by the vehicle detection unit, the coordinates of the vehicle in a world coordinate system are obtained through a camera coordinate-world coordinate transformation formula, wherein the coordinate transformation formula is as follows:
sm=A[R|t]M
where M = [x_w, y_w, z_w]^T is the three-dimensional coordinate in the world coordinate system, m = [x_i, y_i]^T is the two-dimensional coordinate of the bottom center point of the contour frame line of the vehicle detected in the image, and R and t are respectively the rotation matrix and translation vector in the camera extrinsic parameter matrix; A is the camera intrinsic parameter matrix, with A[1,1] = f_x, A[1,3] = c_x, A[2,2] = f_y, A[2,3] = c_y, A[3,3] = 1, and all remaining entries of A are 0, where f_x, f_y, c_x and c_y are respectively the camera focal length in the x-axis direction, the focal length in the y-axis direction, the optical center in the x-axis direction and the optical center in the y-axis direction; s is the depth of field. Here m, A, R and t are known quantities that can be obtained from the data acquisition unit, and s and M are the unknown quantities to be solved.
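Written out in full with homogeneous coordinates for m and M, this is the standard pinhole-camera projection consistent with the definitions above (the skew entry A[1,2] is 0, as stated):

```latex
s\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}
```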
Further, the calculation of the coordinate locating unit is divided into 2 steps:
step 1, depth of field estimation, namely, taking a point on the ground of a horizontal line at the bottom of a vehicle frame line on an image, wherein the two-dimensional coordinate of the point in the image is m g The z-direction component z of the world coordinate of the point w When the ground point and the bottom center point of the vehicle frame line are located at the same horizontal plane of the image, the depth of field s of the two pixel points is the same, and thus, the first linear equation set (e31) can be obtained
sm_g = A[R|t]M_0 (e31)
Step 2, world coordinate solving: for the bottom center point m of the vehicle frame line, the coordinate transformation formula yields the second linear equation system (e32):
sm=A[R|t]M (e32)
By combining the two equations, the known quantity in equation set (e31) can be used to eliminate the unknown quantity depth of field s in equation set (e32) to find the world coordinate M of the bottom center point of the vehicle frame line.
Further, the vehicle tracking unit comprises a distance calculation module and a distance matching module;
The distance calculation module calculates the pixel distance from the centers of a plurality of contour frame lines of a first frame to a plurality of contour frame lines of a second frame between two frames of images according to the contour frame lines of each vehicle of the two frames of images, and a group of inter-frame matching contour frame lines with the closest distance are regarded as the contour frame lines of the two frames of images of the same vehicle;
the distance matching module matches, according to the nearest-match principle, the closest vehicle contour frame lines between two adjacent pictures in the image sequence: the matched pair of adjacent vehicle contour frame lines is assigned the same vehicle ID mark, the matched frame lines are then removed, and matching continues among the remaining frame lines of the two adjacent pictures by the same nearest principle until all vehicle contour frame lines in one of the two adjacent pictures have been matched.
The invention also provides a tracking method using the above front vehicle distance tracking system for automatic driving, which specifically comprises the following steps:
step 1: calibrating a camera, namely calibrating the position parameters and the optical parameters of the vehicle-mounted camera and recording the position parameters and the optical parameters in system data acquisition software;
the camera position parameters comprise the distances from the fixed position of the camera to the vehicle head, the vehicle chassis and two sides of the vehicle body and the three-dimensional angle of the camera relative to the vehicle chassis;
step 2: the method comprises the following steps of identifying and positioning a front vehicle, wherein the identification and positioning of the front vehicle are realized through a vehicle detection unit and a coordinate positioning unit;
and step 3: front vehicle trajectory tracking, in which vehicles that appear repeatedly in the image sequence are identified by the vehicle tracking and positioning unit, each distinct vehicle appearing in the image sequence is given a different unique ID to distinguish it, and the trajectory sequence of each vehicle is output to an XML file.
The beneficial effects of the invention are: the mask region neural network is used to identify vehicle contour frame lines in the image, improving the robustness of vehicle detection in complex road condition scenes. Coordinate transformation based on horizon-line pixel points recovers the three-dimensional coordinate points of the vehicle from the image more accurately. Through the vehicle tracking unit, the system provides not only front vehicle ranging but also vehicle tracking, so the motion trajectory of the front vehicle can be better grasped and its direction of movement anticipated, making the driving decisions of the host vehicle more anticipatory and safer.
Drawings
The advantages of the above and/or additional aspects of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic block diagram of a leading vehicle distance tracking method and system for autonomous driving according to one embodiment of the present invention;
FIG. 2 is a schematic diagram of a coordinate transformation calculation process according to one embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
The embodiment is as follows:
embodiments of the present invention will be described below with reference to fig. 1 to 2.
As shown in fig. 1, the present embodiment provides a leading vehicle distance tracking system 100 for automatic driving, including: a data acquisition unit 10, a vehicle detection unit 20, a coordinate locating unit 30 and a vehicle tracking unit 40.
The data acquisition unit is used for extracting an image sequence with equal time intervals from the camera and simultaneously recording parameters of the camera, wherein the parameters comprise internal parameters representing the focal length, the projection center, the inclination and the distortion of the camera and external parameters representing the position translation and the position rotation of the camera; in this embodiment, the data acquisition unit 10 includes data acquisition hardware and data acquisition software. The data acquisition hardware is a camera fixed on the top of the vehicle; the lens of the camera is oriented parallel to the chassis of the vehicle, and the distances from the camera to the vehicle head, the chassis and the two sides of the vehicle body are measured and recorded in the data acquisition software. While the vehicle is driving, the camera is turned on to shoot the road conditions, and the shot image sequences are transmitted to the subsequent units of the system through the data acquisition software.
The data acquisition software is used for transmitting the shot road condition image sequence information from the camera and recording the parameter information of the camera, and provides data support for the processing of the subsequent units of the system. Specifically, the data acquisition software is divided into an image sequence acquisition module and a camera parameter acquisition module.
The image sequence acquisition module is used for acquiring road condition image sequences from the camera and transmitting the road condition image sequences to the subsequent units. A vehicle-mounted front camera shoots a video of the road condition in front, including the conditions of roads, vehicles and pedestrians. The recorded video is cut into image sequences with equal intervals according to a fixed frame rate. The image sequence contains a plurality of pictures, which are denoted by "picture 1" and "picture 2" … … "picture n" in fig. 1, where n represents the total number of pictures in the image sequence. In the image sequence, the pictures follow the chronological precedence relationship, and the intervals of the shooting time of two adjacent pictures are equal.
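As an illustration of this module, the sketch below cuts a recorded road video into an equally spaced image sequence with OpenCV; the sampling interval and file paths are assumptions, not values fixed by the invention.

```python
# Cut a recorded road video into an equally spaced image sequence.
import cv2

def extract_image_sequence(video_path, out_dir, every_n_frames=10):
    """Save every n-th frame so adjacent pictures are equally spaced in time."""
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            saved += 1
            cv2.imwrite(f"{out_dir}/picture_{saved}.png", frame)
        index += 1
    cap.release()
    return saved  # n, the total number of pictures in the sequence
```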
The camera parameter acquisition module records the external parameters and internal parameters of the camera and transmits them to the subsequent units. Specifically, the external parameters of the camera are the spatial position parameters of the camera placed on the vehicle body, stored as a rotation matrix and a translation vector; the internal parameters of the camera are the optical parameters of the camera itself, including the focal lengths and the optical center components in both the x-axis and y-axis directions.
The vehicle detection unit 20 comprises a plurality of vehicle detection modules, wherein each vehicle detection module is responsible for processing a single image; the vehicle detection module identifies vehicles in the images by using a classifier constructed by a deep neural network model, and fits a contour outline of the vehicle by using a regressor, so that the front vehicle is positioned on a single image;
in this embodiment, the vehicle detection unit 20 includes: n vehicle detection subunits, denoted in fig. 1 by "vehicle detection unit 21", "vehicle detection unit 22", …, "vehicle detection unit 2n", where n again denotes the total number of pictures in the image sequence to be processed. In particular, one vehicle detection subunit is responsible for processing one picture in the image sequence. Each vehicle detection subunit has exactly the same configuration; only the pictures processed differ.
Specifically, one vehicle detection unit 20 is composed of the following functional modules: the device comprises a candidate region generation module, a vehicle discrimination module and a contour outline regression module.
The candidate region generation module is configured to: the candidate area generation module stores a plurality of anchor points with different sizes, and each anchor point is a rectangular area formed by a plurality of pixels. The candidate area generation module sequentially moves the anchor points from the upper left corner of the picture to the lower right corner of the picture from left to right and from top to bottom, and moves the anchor points one pixel unit at a time. And at each position where the anchor point moves, the candidate area generation module judges whether the rectangular area covered by the anchor point possibly contains the vehicle or not according to the pixel characteristics, and if the rectangular area covered by the anchor point possibly contains the vehicle, the position of the anchor point in the picture is recorded as the candidate area.
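A minimal sketch of this sliding-anchor procedure follows; the anchor sizes, the stride and the looks_like_vehicle scoring hook are illustrative placeholders (in the invention that judgment comes from learned pixel features).

```python
# Slide anchors of several sizes over the picture, left-to-right and
# top-to-bottom, recording positions judged to possibly contain a vehicle.
def generate_candidate_regions(img_w, img_h, looks_like_vehicle,
                               anchor_sizes=((64, 48), (128, 96), (256, 192)),
                               stride=1):
    candidates = []
    for aw, ah in anchor_sizes:
        for y in range(0, img_h - ah + 1, stride):
            for x in range(0, img_w - aw + 1, stride):
                if looks_like_vehicle(x, y, aw, ah):
                    candidates.append((x, y, aw, ah))
    return candidates
```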
The vehicle discrimination module is configured to: the vehicle determination module inputs the candidate region that is generated by the candidate region generation module and is likely to contain the vehicle, and outputs information as to whether the candidate region contains the vehicle. Further, the vehicle distinguishing module extracts and classifies the pixel characteristics of the candidate region according to the mask region convolutional neural network classifier, and determines whether the candidate region contains the vehicle according to the probability of each class output by the classifier. Specifically, if the probability of the category "vehicle" is the highest among the probabilities output by the classifier, the candidate region is considered to contain a vehicle, otherwise, the candidate region is considered to contain no vehicle, and subsequent processing is not required.
The contour outline regression module is configured to: finely adjust the contour frame line of the vehicle in the picture on the basis of the candidate region coordinates, according to the mask region convolutional neural network regressor. Further, each contour frame line is represented by the coordinates (x, y) of its upper-left corner together with its width and height (w, h). Each picture fig is converted by the vehicle detection unit into a set of contour frame line parameters, one per vehicle in the picture: ((x_1, y_1, w_1, h_1), (x_2, y_2, w_2, h_2), …, (x_k, y_k, w_k, h_k)), where the variable k indicates that k vehicles are contained in the picture fig; k may take different values for different pictures in the image sequence.
The contour frame line of the vehicle in the image is calculated by the vehicle detection unit using a mask region convolutional neural network; the calculation of the mask region convolutional neural network is divided into three steps:
Step 1, candidate region extraction: several preset anchors of different sizes traverse the whole image from left to right and from top to bottom, and the positions of rectangular blocks that may serve as vehicle contour frame lines are calculated;
Step 2, region object classification: for each candidate region, a convolutional neural network extracts the visual features of the region, and a multilayer perceptron judges the category of the object in the region from these visual features;
Step 3, contour frame line coordinate refinement: for each candidate region, a neural network regresses the offset of the candidate region's contour frame line relative to the contour frame line of the detection target, so as to fit the contour frame line of the detection target more closely.
The coordinate locating unit 30 includes a number of coordinate locating modules, each of which is responsible for processing a single vehicle contour outline; for each vehicle contour frame line, the coordinate positioning module transforms the contour frame line pixel coordinates of the front vehicle into the vehicle body coordinates of the vehicle-mounted camera through a geometric transformation method according to the pixels of the four vertexes of the frame line and the camera parameters obtained by the data acquisition unit (10), so that the spatial position of the front vehicle relative to the vehicle is determined;
in this embodiment, the coordinate locating unit 30 includes: n coordinate locating subunits, denoted in fig. 1 by "coordinate locating unit 31", "coordinate locating unit 32", …, "coordinate locating unit 3n", where n again indicates the total number of pictures in the image sequence to be processed. Specifically, one coordinate locating subunit is responsible for processing the set of contour frame line parameters obtained from one picture of the image sequence by the vehicle detection unit. Each coordinate locating subunit has exactly the same configuration; only the set of contour frame line parameters processed differs.
Specifically, the coordinate locating unit is configured to: according to the coordinates of the vehicle outline frame line in the image, which are obtained by the vehicle detection unit, the coordinates of the vehicle in a world coordinate system are obtained through a camera coordinate-world coordinate transformation formula, wherein the coordinate transformation formula is as follows:
sm=A[R|t]M
where M = [x_w, y_w, z_w]^T is the three-dimensional coordinate in the world coordinate system, m = [x_i, y_i]^T is the two-dimensional coordinate of the point in the picture shot by the camera, and R and t are respectively the rotation matrix and translation vector in the camera extrinsic parameter matrix. A is the camera intrinsic parameter matrix, with A[1,1] = f_x, A[1,3] = c_x, A[2,2] = f_y, A[2,3] = c_y, A[3,3] = 1, and all remaining entries of A are 0, where f_x, f_y, c_x and c_y are respectively the camera focal length in the x-axis direction, the focal length in the y-axis direction, the optical center in the x-axis direction and the optical center in the y-axis direction. s is the depth of field.
According to the above formula, the calculation of the coordinate positioning unit is divided into 2 steps: a depth of field estimation step and a world coordinate solving step. In the depth of field estimation step, a reference point is taken on the ground on the horizontal line at the bottom of the vehicle frame line in the image; its two-dimensional coordinate in the image is m_g, and the z-direction component z_w of its world coordinate is 0. The ground point and the bottom center point of the vehicle frame line lie on the same horizontal line of the image, so the depth of field s of the two pixel points is the same. From this, the first linear equation system (e31) can be derived:
sm_g = A[R|t]M_0 (e31)
where M_0 = [x_0, y_0, 0]^T is the three-dimensional coordinate of the reference point in the world coordinate system, m_g = [x_g, y_g]^T is its two-dimensional coordinate in the image, and R and t are respectively the rotation matrix and translation vector in the camera extrinsic parameter matrix. A is the camera intrinsic parameter matrix, with A[1,1] = f_x, A[1,3] = c_x, A[2,2] = f_y, A[2,3] = c_y, A[3,3] = 1, and all remaining entries of A are 0, where f_x, f_y, c_x and c_y are respectively the camera focal length in the x-axis direction, the focal length in the y-axis direction, the optical center in the x-axis direction and the optical center in the y-axis direction. s is the depth of field.
In the world coordinate solving step, the second linear equation system (e32) is obtained from the coordinate transformation formula for the bottom center point m of the vehicle frame line:
sm=A[R|t]M
where M = [x_w, y_w, z_w]^T is the three-dimensional coordinate, in the world coordinate system, of the bottom center of the contour frame line of the vehicle detected in the image, m = [x_i, y_i]^T is the two-dimensional coordinate of that bottom center point in the image, and R and t are respectively the rotation matrix and translation vector in the camera extrinsic parameter matrix. A is the camera intrinsic parameter matrix, with A[1,1] = f_x, A[1,3] = c_x, A[2,2] = f_y, A[2,3] = c_y, A[3,3] = 1, and all remaining entries of A are 0, where f_x, f_y, c_x and c_y are respectively the camera focal length in the x-axis direction, the focal length in the y-axis direction, the optical center in the x-axis direction and the optical center in the y-axis direction. s is the depth of field.
By combining the two equations, the known quantity in equation set (e31) can be used to eliminate the unknown quantity depth of field s in equation set (e32) to find the world coordinate M of the bottom center point of the vehicle frame line.
Further, for each vehicle contour frame line (x, y, w, h) in each image, the coordinate of the center point of the vehicle contour bottom is calculated as m = (x + w/2, y + h). The ground reference point m_0 = (x′, y + h) lies on the same horizontal line as m and is therefore known accordingly. Since m, m_0 and A, R, t are all given, the simultaneous equations (e31, e32) can be solved for the world coordinates (X, Y, Z) of the bottom of the trailing edge of the vehicle body.
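As a concrete illustration of this solving step, the sketch below back-projects the bottom center pixel to the ground plane with NumPy. It assumes z_w = 0 for the vehicle bottom point, under which sm = A[R|t]M reduces to a 3×3 ground-plane homography, so the elimination of s becomes a single linear solve; the calibration values are placeholders, not taken from the patent.

```python
import numpy as np

def pixel_to_ground(m_xy, A, R, t):
    """Back-project pixel m = (x_i, y_i) to world (x_w, y_w, 0).

    For ground-plane points (z_w = 0), sm = A[R|t]M reduces to
    s*[x_i, y_i, 1]^T = H [x_w, y_w, 1]^T with H = A @ [r1 r2 t],
    so both s and M follow from one linear solve.
    """
    H = A @ np.column_stack((R[:, 0], R[:, 1], t))   # 3x3 homography
    w = np.linalg.solve(H, np.array([m_xy[0], m_xy[1], 1.0]))
    return w[0] / w[2], w[1] / w[2], 0.0             # w[2] equals 1/s

# Placeholder calibration: camera 1.5 m above the ground, level,
# looking forward along the world y-axis (world z points up).
A = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
t = np.array([0.0, 1.5, 0.0])

x, y, w_, h_ = 600.0, 300.0, 120.0, 90.0               # a detected frame line
print(pixel_to_ground((x + w_ / 2, y + h_), A, R, t))  # ~(1.0, 50.0, 0.0)
```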
The vehicle tracking unit 40 is responsible for processing and identifying the same vehicle in the multi-frame images to form a driving track of each vehicle; the vehicle tracking unit 40 identifies the same vehicle, gives a unique id number to the same vehicle, and connects the trajectories in series, thereby realizing the tracking of the preceding vehicle.
Calculating the pixel distance from the centers of a plurality of contour frame lines of a first frame to a plurality of contour frame lines of a second frame between two frames of images according to the contour frame lines of each vehicle of the two frames of images, wherein a group of inter-frame matching contour frame lines closest to the centers are regarded as the contour frame lines of the two frames of images of the same vehicle; therefore, every two adjacent images of the whole image sequence are calculated, all inter-frame matching outline frame lines can be obtained, all different vehicles in the image sequence correspond to the inter-frame matching outline frame lines, and for each vehicle, the outline frame lines of the vehicles in each frame are connected in series, so that the driving track of each vehicle is obtained.
In this embodiment, the vehicle tracking unit 40 includes: the distance calculating module and the distance matching module.
Specifically, the distance calculation module calculates the pixel distance from the centers of a plurality of contour frame lines of a first frame to a plurality of contour frame lines of a second frame between two frames of images according to the contour frame lines of each vehicle of the two frames of images, and a group of inter-frame matching contour frame lines closest to each other are regarded as the contour frame lines of the two frames of images of the same vehicle. And performing distance calculation on every two adjacent pictures of the whole image sequence to obtain the distance of all corresponding vehicle outline frame lines in the two adjacent pictures.
Specifically, the distance matching module is configured to: match, by the nearest-match principle, the closest vehicle contour frame lines between two adjacent pictures in the image sequence; the matched pair of adjacent vehicle contour frame lines is given the same vehicle ID mark, the matched frame lines are then removed, and matching continues among the remaining frame lines of the two adjacent pictures by the nearest principle until all vehicle contour frame lines in one of the two pictures have been matched, the remaining frame lines in the other picture being ignored. This matching operation is performed on all adjacent picture pairs in the whole image sequence to obtain all inter-frame matched contour frame lines and the unique ID of the vehicle corresponding to each. For each vehicle with a unique ID, its contour frame lines in each frame are strung together to obtain its driving trajectory.
Further, the distance calculation module is configured as follows: for two adjacent images fig_1 and fig_2 of the same image sequence, there are vehicle contour frame line sequences B_1 = (b_1^1, b_1^2, …, b_1^k) and B_2 = (b_2^1, b_2^2, …, b_2^k') respectively, where k and k' refer to the numbers of vehicle contour frame lines calculated for fig_1 and fig_2 by the vehicle detection unit 20. The distance d(b_1^i, b_2^j) is defined as the pixel distance between the centers of the two frame lines.
Further, the distance matching module is configured as follows: for each vehicle contour frame line b_1^i of image fig_1, a matching frame line b_2^j of image fig_2 is found according to the nearest principle. By sequentially calculating the matching of all pairs of adjacent images in an image sequence, several continuous matching strings (b_t^i, b_{t+1}^i, …, b_{t+K-1}^i) can be obtained, where K is the number of occurrences of vehicle i in the image sequence. By integrating the results of the coordinate locating unit 30, the coordinate series ((X_t^i, Y_t^i), (X_{t+1}^i, Y_{t+1}^i), …) of vehicle i can be obtained, and this coordinate sequence is output in XML format as the final output result of the front vehicle distance tracking system.
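A minimal sketch of this nearest-match tracking follows; the function names and the new-ID policy for unmatched frame lines are illustrative assumptions.

```python
# Greedy nearest-match tracking: frame lines are (x, y, w, h) tuples,
# vehicle IDs persist across frames of the image sequence.
import math
from itertools import count

def center(fl):
    x, y, w, h = fl
    return (x + w / 2, y + h / 2)

def pixel_distance(a, b):
    (ax, ay), (bx, by) = center(a), center(b)
    return math.hypot(ax - bx, ay - by)

def match_frames(prev, curr):
    """Pair nearest frame lines first; returns {curr_index: prev_index}."""
    pairs, used_prev = {}, set()
    candidates = sorted(
        ((pixel_distance(p, c), i, j)
         for i, p in enumerate(prev) for j, c in enumerate(curr)),
        key=lambda x: x[0])
    for _, i, j in candidates:
        if i not in used_prev and j not in pairs:
            pairs[j] = i
            used_prev.add(i)
    return pairs

def track(sequence_frame_lines):
    """Assign a unique ID to every vehicle across the image sequence."""
    next_id = count(1)
    ids_per_frame = []
    for idx, frame_lines in enumerate(sequence_frame_lines):
        if idx == 0:
            ids_per_frame.append([next(next_id) for _ in frame_lines])
            continue
        pairs = match_frames(sequence_frame_lines[idx - 1], frame_lines)
        ids = [ids_per_frame[-1][pairs[j]] if j in pairs else next(next_id)
               for j in range(len(frame_lines))]
        ids_per_frame.append(ids)
    return ids_per_frame
```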
The embodiment also provides a front vehicle distance tracking method for automatic driving, which specifically comprises the following steps:
step 1: and (3) calibrating a camera, wherein the camera calibration step requires that the position parameters and the optical parameters of the vehicle-mounted camera are calibrated and recorded in system data acquisition software.
Specifically, the camera position parameters comprise the distances from the fixed position of the camera to the vehicle head, the vehicle chassis and two sides of the vehicle body and the solid angle of the camera relative to the vehicle chassis. The distance between the camera and the two sides of the vehicle head, the vehicle chassis and the vehicle body is represented by a translation matrix t in the camera external parameter matrix, and the solid angle between the camera and the vehicle chassis is represented by a rotation matrix R in the camera external parameter matrix.
The optical parameters of the camera are represented by the camera intrinsic parameter matrix A, where A[1,1] = f_x, A[1,3] = c_x, A[2,2] = f_y, A[2,3] = c_y, A[3,3] = 1, and all remaining entries of A are 0, where f_x, f_y, c_x and c_y are respectively the camera focal length in the x-axis direction, the focal length in the y-axis direction, the optical center in the x-axis direction and the optical center in the y-axis direction.
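In code, this intrinsic parameter matrix is assembled directly from the four calibrated values; a minimal sketch with placeholder numbers:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Camera intrinsic matrix A as defined above (zero skew)."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

A = intrinsic_matrix(fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```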
Step 2: the recognition and positioning of the preceding vehicle is realized by the vehicle detection unit 20 and the coordinate positioning unit 30.
Specifically, a set of image sequences contains n equally time-spaced pictures: "picture 1", "picture 2", …, "picture n"; the n vehicle detection subunits correspond one-to-one with the frames of the image sequence. Each vehicle detection subunit detects the contour frame lines of all vehicles in one picture, and each contour frame line is represented by the coordinates of its upper-left corner together with its width and height. For a picture containing k vehicles, the front vehicle identification and positioning step yields the following set of k contour frame lines:
((x_1, y_1, w_1, h_1), (x_2, y_2, w_2, h_2), …, (x_k, y_k, w_k, h_k)), where x, y, w, h respectively represent the abscissa of the top-left pixel, the ordinate of the top-left pixel, the width and the height of the contour frame line. Subscripts 1, 2, …, k correspond to the 1st, 2nd, …, k-th vehicles in the picture respectively.
For each contour frame line in each frame of picture, the coordinate positioning unit first obtains, from the frame line parameters (x, y, w, h), the two-dimensional coordinate m = (x + w/2, y + h) of the midpoint of the vehicle bottom on the vehicle contour frame line in the picture. Further, the coordinate positioning unit converts this two-dimensional bottom-midpoint coordinate into the three-dimensional coordinate M = (X, Y, 0) in the real-world coordinate system, where X and Y are the transverse distance and the longitudinal distance of the front vehicle relative to the host vehicle in the real-world coordinate system.
And step 3: the trajectory of the vehicle in front is tracked. And identifying vehicles which repeatedly appear in the image sequence through a vehicle tracking and positioning unit, endowing different vehicles appearing in the image sequence with different unique IDs as differences, and outputting the track sequence of each vehicle to an XML file.
Each frame of picture in the image sequence is converted by step 2 into several coordinate points, each representing the position of a front vehicle in that frame. The vehicle tracking and positioning unit identifies the same vehicle in two adjacent frames, so that the vehicle coordinates in the different pictures of the whole image sequence are arranged into several continuous trajectories, each corresponding to a front vehicle with a unique ID. One trajectory takes the form ((X_t^i, Y_t^i), (X_{t+1}^i, Y_{t+1}^i), …, (X_T^i, Y_T^i)), where X, Y represent the lateral and longitudinal distances of the front vehicle relative to the host vehicle respectively, the subscripts t, t+1, …, T represent the continuous time sequence during which the vehicle is present in the camera view, and the superscript i represents the unique ID of the vehicle. The vehicle trajectory is ultimately saved in an XML file, an example of which is as follows:
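One possible layout for such a file, with illustrative tag names and placeholder values, is:

```xml
<trajectories>
  <vehicle id="1">
    <point t="12" x="-1.2" y="35.6"/>
    <point t="13" x="-1.1" y="34.9"/>
    <point t="14" x="-1.0" y="34.1"/>
  </vehicle>
  <vehicle id="2">
    <point t="13" x="2.3" y="18.7"/>
    <point t="14" x="2.4" y="18.2"/>
  </vehicle>
</trajectories>
```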
the steps in the present application may be sequentially adjusted, combined, and subtracted according to actual requirements.
The units in the device can be merged, divided and deleted according to actual requirements.
Although the present application has been disclosed in detail with reference to the accompanying drawings, it is to be understood that such description is merely illustrative and is not intended to limit the application of the present application. The scope of the present application is defined by the appended claims and may include various modifications, adaptations, and equivalents of the invention without departing from the scope and spirit of the application.
Claims (4)
1. A front-vehicle distance tracking system for autonomous driving, the system comprising: the system comprises a data acquisition unit (10), a vehicle detection unit (20), a coordinate positioning unit (30) and a vehicle tracking unit (40);
the data acquisition unit (10) is used for extracting an image sequence at equal time intervals from the camera and simultaneously recording parameters of the camera, wherein the parameters comprise internal parameters representing the focal length, the projection center, the inclination and the distortion of the camera and external parameters representing the position translation and the position rotation of the camera;
the vehicle detection unit (20) comprises a plurality of vehicle detection modules, wherein each vehicle detection module is responsible for processing a single image; the vehicle detection module identifies vehicles in the images by using a classifier constructed by a deep neural network model, and fits a contour outline of the vehicle by using a regressor, so that the front vehicle is positioned on a single image;
the coordinate locating unit (30) comprises a plurality of coordinate locating modules, wherein each coordinate locating module is responsible for processing a single vehicle contour outline; for each vehicle contour frame line, the coordinate positioning module transforms the contour frame line pixel coordinates of the front vehicle into the vehicle body coordinates of the vehicle-mounted camera through a geometric transformation method according to the pixels of the four vertexes of the frame line and the camera parameters obtained by the data acquisition unit (10), so that the spatial position of the front vehicle relative to the vehicle is determined;
the coordinate positioning unit (30) is configured to:
according to the coordinates of the vehicle in the image, which are obtained by the vehicle detection unit (20), the coordinates of the vehicle in a world coordinate system are obtained through a camera coordinate-world coordinate transformation formula, wherein the coordinate transformation formula is as follows:
sm = A[R|t]M
where M = [x_w, y_w, z_w]^T is a three-dimensional coordinate in the world coordinate system, m = [x_i, y_i]^T is the two-dimensional coordinate of the bottom center point of the contour frame line of the vehicle detected in the image, and R and t are respectively the rotation matrix and translation vector in the camera extrinsic parameter matrix; A is the camera intrinsic parameter matrix, with A[1,1] = f_x, A[1,3] = c_x, A[2,2] = f_y, A[2,3] = c_y, A[3,3] = 1, and all remaining entries of A are 0; where f_x, f_y, c_x and c_y are respectively the camera focal length in the x-axis direction, the focal length in the y-axis direction, the optical center in the x-axis direction and the optical center in the y-axis direction; s is the depth of field; m, A, R and t are known quantities that can be acquired from the data acquisition unit (10), and s and M are the unknown quantities to be solved;
the calculation of the coordinate positioning unit is divided into 2 steps:
step 1, depth of field estimation: a reference point is taken on the ground on the horizontal line at the bottom of the vehicle frame line in the image, wherein its two-dimensional coordinate in the image is m_g and the z-direction component z_w of its world coordinate is known to be 0; as the ground point and the bottom center point of the vehicle frame line lie on the same horizontal line of the image, the depth of field s of the two pixel points is the same, from which the first linear equation system (e31) can be derived:
sm_g = A[R|t]M_0 (e31)
Step 2, solving the step according to the world coordinates, and using a coordinate transformation formula to determine the bottom center point of the vehicle frame lineObtain a second system of linear equations (e32)
by combining the two systems, the unknown depth of field s in equation system (e32) can be eliminated using the known quantities in equation system (e31), thereby obtaining the world coordinate M of the bottom center point of the vehicle frame line;
The vehicle tracking unit (40) is responsible for processing and identifying the same vehicle in the multi-frame images to form the driving track of each vehicle; the vehicle tracking unit (40) identifies the same vehicle appearing in different pictures, gives a unique id number to the vehicle, and connects the driving tracks in series, thereby realizing the tracking of the front vehicle;
the vehicle tracking unit (40) comprises a distance calculation module and a distance matching module;
the distance calculation module calculates the pixel distance from the centers of a plurality of contour frame lines of a first frame to a plurality of contour frame lines of a second frame between two frames of images according to the contour frame lines of each vehicle of the two frames of images, and a group of inter-frame matching contour frame lines with the closest distance are regarded as the contour frame lines of the two frames of images of the same vehicle;
the distance matching module matches, according to the nearest-match principle, the closest vehicle contour frame lines between two adjacent images in the image sequence: the matched pair of adjacent vehicle contour frame lines is given the same vehicle ID mark, the matched vehicle contour frame lines are then removed, and matching continues among the remaining vehicle contour frame lines of the two adjacent images by the same nearest principle until all vehicle contour frame lines in one of the two adjacent images have been matched.
2. The preceding vehicle distance tracking system for autonomous driving according to claim 1, characterized in that the vehicle detection unit (20) includes a candidate region generation module, a vehicle discrimination module, and a contour outline regression module;
the candidate region generation module stores a plurality of anchor points with different sizes, and each anchor point is a rectangular region composed of a plurality of pixels;
the vehicle discrimination module receives as input the candidate regions, generated by the candidate region generation module, that may contain a vehicle, and outputs whether each candidate region contains a vehicle;
and the outline regression module finely adjusts the outline of the vehicle in the image on the basis of the candidate region coordinates according to the mask region convolution neural network regressor.
3. The front vehicle distance tracking system for autonomous driving of claim 2,
according to the vehicle detection unit, the contour frame line of the vehicle in the image is calculated using a mask region convolutional neural network; the calculation of the mask region convolutional neural network is divided into three steps:
step 1, candidate region extraction: several preset anchors of different sizes traverse the whole image from left to right and from top to bottom, and the positions of rectangular blocks that may serve as vehicle contour frame lines are calculated;
step 2, region object classification: for each candidate region, a convolutional neural network extracts the visual features of the region, and a multilayer perceptron judges the category of the object in the region from these visual features;
step 3, contour frame line coordinate refinement: for each candidate region, a neural network regresses the offset of the candidate region's contour frame line relative to the contour frame line of the detection target, so as to fit the contour frame line of the detection target more closely.
4. A method for tracking using the system for tracking a distance to a leading vehicle for automatic driving of claim 1, comprising the steps of:
step 1: calibrating a camera, namely calibrating the position parameters and the optical parameters of the vehicle-mounted camera and recording the position parameters and the optical parameters in system data acquisition software;
the camera position parameters comprise the distances from the fixed position of the camera to the vehicle head, the vehicle chassis and two sides of the vehicle body and the three-dimensional angle of the camera relative to the vehicle chassis;
step 2: the method comprises the steps of identifying and positioning a front vehicle, wherein the identification and positioning of the front vehicle are realized through a vehicle detection unit (20) and a coordinate positioning unit (30);
and step 3: front vehicle trajectory tracking, in which vehicles that appear repeatedly in the image sequence are identified by the vehicle tracking and positioning unit, each distinct vehicle appearing in the image sequence is given a different unique ID to distinguish it, and the trajectory sequence of each vehicle is output to an XML file.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910953010.XA CN110717445B (en) | 2019-10-09 | 2019-10-09 | Front vehicle distance tracking system and method for automatic driving |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910953010.XA CN110717445B (en) | 2019-10-09 | 2019-10-09 | Front vehicle distance tracking system and method for automatic driving |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110717445A CN110717445A (en) | 2020-01-21 |
CN110717445B true CN110717445B (en) | 2022-08-23 |
Family
ID=69212304
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910953010.XA Active CN110717445B (en) | 2019-10-09 | 2019-10-09 | Front vehicle distance tracking system and method for automatic driving |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110717445B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111950395B (en) * | 2020-07-24 | 2023-11-24 | 中南大学 | Vehicle identification method and device and computer storage medium |
CN112346039A (en) * | 2020-09-15 | 2021-02-09 | 深圳市点创科技有限公司 | Monocular camera-based vehicle distance early warning method, electronic device and storage medium |
CN112733778B (en) * | 2021-01-18 | 2021-08-10 | 国汽智控(北京)科技有限公司 | Vehicle front guide determination method and device and computer equipment |
CN113160187B (en) * | 2021-04-27 | 2022-02-15 | 圣名科技(广州)有限责任公司 | Fault detection method and device of equipment |
CN113657265B (en) * | 2021-08-16 | 2023-10-10 | 长安大学 | Vehicle distance detection method, system, equipment and medium |
CN115100839B (en) * | 2022-07-27 | 2022-11-01 | 苏州琅日晴传媒科技有限公司 | Monitoring video measured data analysis safety early warning system |
CN118334619A (en) * | 2024-04-11 | 2024-07-12 | 清华大学 | Intelligent networking bus multi-vehicle formation sensing method and device based on monocular camera |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3621850B1 (en) * | 2017-06-05 | 2023-08-30 | Adasky, Ltd. | Shutterless far infrared (fir) camera for automotive safety and driving systems |
US10474988B2 (en) * | 2017-08-07 | 2019-11-12 | Standard Cognition, Corp. | Predicting inventory events using foreground/background processing |
-
2019
- 2019-10-09 CN CN201910953010.XA patent/CN110717445B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108230393A (en) * | 2016-12-14 | 2018-06-29 | 贵港市瑞成科技有限公司 | A kind of distance measuring method of intelligent vehicle forward vehicle |
CN108259764A (en) * | 2018-03-27 | 2018-07-06 | 百度在线网络技术(北京)有限公司 | Video camera, image processing method and device applied to video camera |
CN109325467A (en) * | 2018-10-18 | 2019-02-12 | 广州云从人工智能技术有限公司 | A kind of wireless vehicle tracking based on video detection result |
CN110096960A (en) * | 2019-04-03 | 2019-08-06 | 罗克佳华科技集团股份有限公司 | Object detection method and device |
KR101986592B1 (en) * | 2019-04-22 | 2019-06-10 | 주식회사 펜타게이트 | Recognition method of license plate number using anchor box and cnn and apparatus using thereof |
Non-Patent Citations (1)
Title |
---|
Vehicle Recognition and Detection Based on Improved Mask R-CNN; Bai Baolin; China Master's Theses Full-text Database, Information Science & Technology; 2018-08-15; main text pp. 16-34 *
Also Published As
Publication number | Publication date |
---|---|
CN110717445A (en) | 2020-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110717445B (en) | Front vehicle distance tracking system and method for automatic driving | |
CN110441791B (en) | Ground obstacle detection method based on forward-leaning 2D laser radar | |
JP5926228B2 (en) | Depth detection method and system for autonomous vehicles | |
CN111448478B (en) | System and method for correcting high-definition maps based on obstacle detection | |
US9846812B2 (en) | Image recognition system for a vehicle and corresponding method | |
JP4612635B2 (en) | Moving object detection using computer vision adaptable to low illumination depth | |
US20050232463A1 (en) | Method and apparatus for detecting a presence prior to collision | |
US7321669B2 (en) | Method and apparatus for refining target position and size estimates using image and depth data | |
JP2015181042A (en) | detection and tracking of moving objects | |
EP2960858B1 (en) | Sensor system for determining distance information based on stereoscopic images | |
WO2004114202A1 (en) | Vehicular vision system | |
JP2006053756A (en) | Object detector | |
KR20210090384A (en) | Method and Apparatus for Detecting 3D Object Using Camera and Lidar Sensor | |
JP6816401B2 (en) | Image processing device, imaging device, mobile device control system, image processing method, and program | |
WO2018179281A1 (en) | Object detection device and vehicle | |
CN111723778B (en) | Vehicle distance measuring system and method based on MobileNet-SSD | |
CN114419098A (en) | Moving target trajectory prediction method and device based on visual transformation | |
WO2018202464A1 (en) | Calibration of a vehicle camera system in vehicle longitudinal direction or vehicle trans-verse direction | |
CN113781562A (en) | Lane line virtual and real registration and self-vehicle positioning method based on road model | |
Lefebvre et al. | Vehicle detection and tracking using mean shift segmentation on semi-dense disparity maps | |
JP4344860B2 (en) | Road plan area and obstacle detection method using stereo image | |
JP5539250B2 (en) | Approaching object detection device and approaching object detection method | |
US20220309776A1 (en) | Method and system for determining ground level using an artificial neural network | |
CN106709432B (en) | Human head detection counting method based on binocular stereo vision | |
JP2007280387A (en) | Method and device for detecting object movement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |