CN113792634A: Target similarity score calculation method and system based on vehicle-mounted camera


Info

Publication number: CN113792634A (granted as CN113792634B)
Application number: CN202111042603.4A
Authority: CN (China)
Language: Chinese (zh)
Prior art keywords: target, score, vehicle, calculating, similarity score
Inventors: 邓立凯, 朱垠吉, 梁义辉
Applicant and assignee: Beijing Yihang Yuanzhi Technology Co Ltd
Legal status: Granted; Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Abstract

The invention relates to a vehicle-mounted camera-based target similarity score calculation method, a multi-target tracking method, a multi-target matching method and system, an electronic device and a computer-readable storage medium. By constraining the spatial relative relationship of the targets to be matched with lane information, the calculation accuracy of the similarity score and the matching accuracy of the targets are improved. In addition, the problem of matching failure in the prior art, caused by a low matching registration rate or low appearance-information discrimination when the target appearance information is destabilized by external factors, is effectively solved.

Description

Target similarity score calculation method and system based on vehicle-mounted camera
Technical Field
The invention relates to the technical field of intelligent driving, in particular to a target similarity score calculation method, a multi-target tracking method, a multi-target matching method and system, electronic equipment and a computer readable storage medium based on a vehicle-mounted camera.
Background
With the rapid development of the automatic driving technology and the continuous improvement of the processing capability of mobile hardware, computer vision is continuously applied in the automatic driving field and gradually plays an increasingly important role.
In order to increase the perception capability of the vehicle to the surrounding environment, more and more cameras are mounted at different positions of the vehicle body to acquire targets and environments at different angles and ranges as much as possible, so that the vehicle is helped to realize functions of automatic lane changing, automatic vehicle following, automatic obstacle avoidance and the like. One of the problems with multiple cameras is how to solve matching and tracking between targets, where matching refers to the need to spatially determine the uniqueness of the target identities observed by multiple cameras when the same target appears in the fields of view of multiple cameras at the same time, and tracking refers to the uniqueness of the target in time domain, i.e. the identities at different times. Therefore, in a multi-target matching and tracking system under multiple cameras, how to perform similarity matching on targets in time domain and space domain is becoming one of the key technologies in the field of automatic driving.
In the related technology, a detected target of each path of current video frame image is matched with a detected target of a previous video frame image in the path, the target in each path of video frame image is tracked, and two targets with higher appearance similarity and similar three-dimensional positions are determined to be the same target by calculating the difference of three-dimensional position coordinates and appearance similarity, so that the aim of matching the targets in multiple paths of video frames is fulfilled.
According to the method, the three-dimensional coordinates of the target are calculated through the projection matrix, the distance measurement accuracy is influenced by a target detection result, a camera calibration result and the actual distance of the target, a large error exists, and effective matching is difficult to perform. When a large-volume object such as a vehicle enters an overlapping area of two cameras, the object does not completely appear in each camera, and the method does not provide an effective matching method for the cross-camera object which does not completely appear in a picture. Influenced by the installation angles of different cameras, illumination changes and shielding conditions, the appearances of the same target in different cameras may have great difference, and effective matching is difficult to perform.
In another related technology, a plurality of cameras are adopted to synchronously acquire scene images from different fixed angles, and observation information of a target under different postures is acquired; detecting the target in each image by adopting a target detector based on a deep convolutional network, and outputting a target detection result; extracting a global feature map of each image by adopting a deep convolutional neural network, and extracting a local feature map of a corresponding position of a target on the global feature map according to a target detection result to obtain an appearance vector of the target; coding a camera to generate a view vector containing observation view information; generating a position vector of the target according to the position of the target detection frame corresponding to the target in the image coordinate system; carrying out vector fusion on the appearance vector, the visual angle vector and the position vector, and generating a target expression vector after transformation; training the deep convolutional neural network by adopting a triple data set, and learning a target expression vector for re-recognition; in the training process, a triple data set is generated and updated by adopting a method combining off-line mining and on-line mining; and clustering the learned target expression vectors corresponding to the targets in each image by adopting a constraint hierarchical clustering method to realize cross-camera target re-identification.
The method for training and clustering the fusion data of the appearance vector, the visual angle vector and the position vector by adopting the neural network needs to depend on a large amount of data labels and consumes time and system resources in reasoning comparison, and the requirements on precision and instantaneity are difficult to meet at the same time.
In another related technique, in a multi-camera system, each camera maintains target information of all neighboring cameras around which cross-camera motion is likely to occur, and predicts a likely motion direction of a target according to a historical motion trajectory of each target, i.e., determines the next camera where the target will appear. And matching and tracking the targets by calculating the matching degree between the two camera targets in space, time and color.
The method proposes to calculate the matching degree through space, time and color information, but does not give a detailed calculation method; moreover, color information is easily affected by illumination and changes in target posture, causing target matching failure.
In summary, the currently mainstream matching method mainly utilizes appearance information (including color, texture, shape, etc.) of the target, and can express the characteristics of the target to a great extent. However, the appearance of the target is often affected by many external factors, such as illumination, posture, occlusion, etc., and thus, the method has great instability. In addition, for some targets (like types of vehicles), the appearances of individuals are similar and have little difference, which often makes the appearance information useless in distinguishing them, so that matching tracking by the appearance model is difficult or impossible.
Disclosure of Invention
In view of the above, the present invention provides a method for calculating a similarity score of a target based on a vehicle-mounted camera, a multi-target tracking method, a multi-target matching method and system, an electronic device, and a computer-readable storage medium, so as to solve the problem of poor matching effect caused by similarity matching between targets using appearance information of the targets in the prior art.
According to a first aspect of the embodiments of the present invention, there is provided a target similarity score calculation method based on a vehicle-mounted camera, including:
determining a first target and a second target to be matched, wherein the first target and the second target exist in different camera images;
respectively calculating constraint scores of the first target and the second target belonging to the same lane, and intersection ratio scores of the first target and the second target;
and determining the similarity score of the first target and the second target according to the constraint score and the intersection ratio score.
Preferably, the determining a first target and a second target to be matched includes:
if multi-target tracking in the time domain is carried out, determining targets in different camera images at adjacent moments as a first target and a second target; and/or,
and if multi-camera multi-target matching is carried out, determining the targets appearing in the overlapped visual field of the cameras at the same moment as the first target and the second target.
Preferably, the calculating the constraint score that the first target and the second target belong to the same lane includes:
calculating a first probability that the first target belongs to any lane;
calculating a second probability that the second target belongs to the same lane;
and calculating a constraint score of the first target and the second target belonging to the same lane according to the first probability and the second probability.
Preferably, the calculating a first probability that the first target belongs to any lane includes:
detecting a 3D frame and a lane line of a first target from a camera image according to a preset detection model;
mapping the grounding point and the lane line of the 3D frame to a vehicle coordinate system from an image coordinate system to obtain the pose of the first target under the vehicle coordinate; the poses are coordinates of at least three grounding points of the 3D frame;
and calculating the first probability according to the pose of the first target and the lane line coordinate.
Preferably, the calculating a second probability that the second target belongs to the same lane includes:
if multi-target tracking in the time domain is carried out, estimating the pose of the second target at the current moment according to the tracking result of the second target at the previous moment, and calculating the second probability according to the estimated pose;
and/or,
if multi-camera multi-target matching is carried out, the second probability is calculated in the same way as the first probability.
Preferably, the calculating the intersection ratio score of the first target and the second target comprises:
respectively determining the corner coordinates of the first target and the corner coordinates of the second target in the top view at the current moment;
calculating the area of a triangle formed by each corner point of the first target and the origin of the first vehicle coordinate system, and determining the triangle corresponding to the maximum area as a first maximum triangle;
calculating the area of the triangle formed by each corner point of the second target and the origin of the second vehicle coordinate system, and determining the triangle corresponding to the maximum area as a second maximum triangle;
and taking the intersection ratio of the first maximum triangle and the second maximum triangle as the intersection ratio score of the first target and the second target.
Preferably, the determining the corner coordinates of the first object and the corner coordinates of the second object in the top view at the current time respectively includes:
if multi-target tracking in the time domain is carried out, calculating the corner point coordinates of the first target under the top view at the current moment according to the pose of the first target at the current moment; estimating the corner point coordinates of the second target under the top view at the current moment according to the pose of the second target at the previous moment and the vehicle motion model; and/or,
if multi-camera multi-target matching is carried out, calculating the corner point coordinates of the first target under the top view at the current moment according to the pose of the first target at the current moment; and calculating the corner point coordinates of the second target under the top view at the current moment according to the pose of the second target at the current moment.
Preferably, the method further comprises:
if multi-target tracking in the time domain is carried out, determining the geometric center point of the current vehicle at the current moment as the origin of a first vehicle coordinate system; determining the geometric center point of the current vehicle at the current moment estimated according to the geometric center point of the current vehicle at the previous moment and the vehicle motion model as the origin of a second vehicle coordinate system; and/or,
if multi-camera multi-target matching is carried out, the origin of the first vehicle coordinate system is the same as the origin of the second vehicle coordinate system, and the origins are the geometric center points of the current vehicle at the current moment.
Preferably, the determining the similarity score of the first target and the second target according to the constraint score and the cross-over ratio score includes:
and according to a preset weight, carrying out weighted summation on the constraint score and the intersection ratio score to obtain a similarity score of the first target and the second target.
According to a second aspect of the embodiments of the present invention, there is provided a multi-target tracking method based on a vehicle-mounted camera, including:
the above-described target similarity score calculation method based on the vehicle-mounted camera.
Preferably, the method further comprises:
taking the first target as a reference target and the second target as a target to be matched;
calculating a similarity score for the first target and all second targets;
judging whether the similarity score is larger than a threshold value; if so, storing the similarity score into a score result list; otherwise, judging that the first target does not match the current second target and discarding the current similarity score; and
sorting the similarity scores in the score result list, and taking the second target corresponding to the highest similarity score as the tracking result.
According to a third aspect of the embodiments of the present invention, there is provided a multi-target matching method based on a vehicle-mounted camera, including:
the above-described target similarity score calculation method based on the vehicle-mounted camera.
Preferably, the method further comprises:
taking the first target as a reference target and the second target as a target to be matched;
calculating a similarity score for the first target and all second targets;
judging whether the similarity score is larger than a threshold value; if so, storing the similarity score into a score result list; otherwise, judging that the first target does not match the current second target and discarding the current similarity score; and
sorting the similarity scores in the score result list, and taking the second target corresponding to the highest similarity score as the matching result.
According to a fourth aspect of the embodiments of the present invention, there is provided a target similarity score calculation system based on an in-vehicle camera, including:
a determining module for determining a first target and a second target to be matched, the first target and the second target being present in different camera images;
the calculation module is used for calculating the constraint scores of the first target and the second target belonging to the same lane and the intersection ratio score of the first target and the second target;
the determining module is further configured to determine a similarity score of the first target and the second target according to the constraint score and the cross-over ratio score.
According to a fifth aspect of an embodiment of the present invention, there is provided an electronic apparatus, including:
the system comprises a communication module, a processor and a memory, wherein the memory stores program instructions;
the processor is configured to execute program instructions stored in the memory to perform the above-described method.
According to a sixth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a rewritable computer program;
when the computer program is run on a computer device, it causes the computer device to perform the method described above.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
compared with the prior art, the technical scheme provided by the invention has the advantages that the lane information is considered, the spatial relative relation between the first target and the second target to be matched is restrained, the calculation accuracy of the similarity score can be effectively improved, the matching accuracy of the targets is improved, and the positioning error is reduced.
Moreover, due to the technical scheme provided by the invention, the consideration of the target appearance information is abandoned, so that the problem of matching failure caused by low matching registration rate or low appearance information distinction degree due to unstable target appearance information influenced by a plurality of external factors in the prior art is effectively solved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a method for vehicle camera-based target similarity score calculation according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating the effect of detecting a 3D box and lane lines of a target from a camera image through a preset detection algorithm according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating the effect of mapping the grounding points and lane lines of a 3D frame to a vehicle coordinate system according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating the presence of targets between lane lines in accordance with an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a change in position of a target from time t1 to time t2, according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating the intersection ratio calculation of the first target and the second target according to an exemplary embodiment;
FIG. 7 is a flow diagram illustrating a method for multi-target tracking based on onboard cameras in accordance with an exemplary embodiment;
FIG. 8 is a flow diagram illustrating a vehicle camera-based multi-target matching method in accordance with an exemplary embodiment;
FIG. 9 is a schematic block diagram illustrating an in-vehicle camera-based target similarity score calculation system in accordance with an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
It should be noted that, the "current vehicle" mentioned in the embodiments of the present invention refers to a "vehicle in which the onboard camera is located". Preferably, the first and second targets are defined as vehicles, but in some special application scenarios, the first and second targets may also be defined as various static obstacles and/or dynamic obstacles on the lane, such as greening trees, signboards, mud pits, stones, animals, and the like.
Example one
Fig. 1 is a flowchart illustrating a target similarity score calculation method based on an in-vehicle camera according to an exemplary embodiment, as shown in fig. 1, the method including:
step S11, determining a first target and a second target to be matched, wherein the first target and the second target exist in different camera images;
step S12, respectively calculating constraint scores of the first target and the second target belonging to the same lane, and intersection ratio scores of the first target and the second target;
and step S13, determining the similarity score of the first target and the second target according to the constraint score and the intersection ratio score.
It should be noted that application scenarios to which the technical solution provided by this embodiment is applicable include, but are not limited to: automatic driving, assisted driving, etc. of the vehicle. The technical scheme provided by the embodiment can be loaded in a central control system of the current vehicle for use and can also be loaded in electronic equipment for use when in actual use; the electronic devices include, but are not limited to: vehicle-mounted computer and external computer equipment.
In some application scenarios, the first object and the second object may be present in camera images taken at different times; in some other application scenarios, the first object and the second object may also be present in camera images taken by different cameras at the same time.
The number of the vehicle-mounted cameras is at least one.
When the number of the vehicle-mounted cameras is only one, the technical scheme provided by the embodiment can be used for calculating the similarity score of each target in different camera images at the same time, so that the similarity score calculation among the targets in the time domain is realized;
when the number of the vehicle-mounted cameras is at least two, the technical scheme provided by the embodiment can be used for calculating the similarity scores of all targets in different camera images at the same time to realize the similarity score calculation among the targets in the spatial domain, and can also be used for calculating the similarity scores of all targets in the camera images at different times to realize the similarity score calculation among the targets in the time domain; the time interval between the different time instants can be set according to the user requirement, for example, can be set to 10ms, 20ms, 1s, and the like.
In a specific practice, the step S11 of "determining a first target and a second target to be matched" includes:
if multi-target tracking in the time domain is carried out, determining targets in camera images at adjacent moments as a first target and a second target; and/or,
and if multi-camera multi-target matching is carried out, determining the targets appearing in the overlapped visual field of the cameras at the same moment as the first target and the second target.
Assume the current vehicle has four vehicle-mounted cameras A, B, C and D, the current time is 8:00, and the previous time is 7:59.
If multi-target tracking in the time domain is carried out, the method comprises the following steps:
acquiring camera images shot by each vehicle-mounted camera at 7:59 and 8:00 respectively;
taking a target detected from a camera image shot at 8:00 as a first target, and performing pose prediction at the time of 8:00 on the target detected from the camera image shot at 7:59 to obtain a predicted target as a second target; randomly selecting a first target as a reference target, taking a second target as a target to be matched, and calculating similarity scores of the first target and all the second targets; the calculation is then repeated with the next first target in turn.
If multi-camera multi-target matching is carried out, the method comprises the following steps:
acquiring camera images shot by each vehicle-mounted camera at 8:00 hours;
randomly selecting one of the targets detected from the camera images shot at 8:00 as a first target, taking the targets in the other camera images as second targets, taking the first target as the reference target and the second targets as the targets to be matched, and calculating the similarity scores of the first target and all the second targets; the calculation is then repeated with the next first target in turn.
In order to facilitate understanding of the technical solution provided by this embodiment, the following explanation is provided for calculating the constraint scores of the first target and the second target belonging to the same lane and the intersection ratio score of the first target and the second target in step 12, respectively:
First, calculating the constraint score that the first target and the second target belong to the same lane
1. Calculating a first probability that the first target belongs to any lane, comprising:
(1) All the vehicle-mounted cameras are calibrated, and the internal parameter matrix M1 and external parameter matrix M2 of each camera are obtained to determine the projection matrix P of each camera, where P = M1·M2.
It can be understood that the camera calibration is a process of converting the vehicle coordinate system into the camera coordinate system, and then converting the camera coordinate system into the image coordinate system, that is, a process of obtaining the final projection matrix P.
(2) The 3D frame and the lane line of the first object are detected from the camera image according to a preset detection model (e.g., a convolutional neural network model) (see fig. 2).
(3) Mapping the grounding points (such as points P1, P2 and P3 in FIG. 2) and the lane lines of the 3D frame to a vehicle coordinate system from the image coordinate system through the projection matrix to obtain the pose of the first target in the vehicle coordinate system; the pose is the coordinates of at least three grounding points of the 3D frame:
Assuming that the coordinates of any point in the image coordinate system are (x, y) and its coordinates in the vehicle coordinate system are (X, Y, Z), the camera imaging principle gives

$$s\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = P\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}$$

where s is a scale factor; with this relation, the grounding points and the lane lines of the 3D frame are mapped from the image coordinate system to the vehicle coordinate system to obtain the pose of the first target (see fig. 3).
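For ease of understanding, a minimal Python sketch of this back-projection is given below; it assumes that the grounding points and lane-line samples lie on the ground plane Z = 0 of the vehicle coordinate system (so that P reduces to an invertible 3x3 homography), and the function name and array layout are merely illustrative:

```python
import numpy as np

def image_to_vehicle_ground(pts_img, P):
    """Back-project image points that lie on the ground plane (Z = 0 in
    the vehicle frame) into vehicle coordinates.

    pts_img : (N, 2) array of pixel coordinates (x, y)
    P       : 3x4 projection matrix, P = M1 @ M2
    """
    pts_img = np.asarray(pts_img, dtype=float)
    # On the plane Z = 0 the projection reduces to a 3x3 homography
    # built from columns 0, 1 and 3 of P (the Z column drops out).
    H = P[:, [0, 1, 3]]
    H_inv = np.linalg.inv(H)
    pts_h = np.hstack([pts_img, np.ones((len(pts_img), 1))])  # homogeneous pixels
    ground = (H_inv @ pts_h.T).T
    ground /= ground[:, 2:3]        # dehomogenize
    return ground[:, :2]            # (X, Y) in the vehicle frame, Z = 0
```

Under this ground-plane assumption no depth estimation is required; only the inverse of the reduced homography is needed.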
It can be understood that there are various descriptions of the pose, and the technical solution of this embodiment uses coordinates of at least three grounding points of the 3D frame. From the coordinates of the three grounding points, other pose information of the target can be calculated, for example: size (width and length), heading angle, geometric center point coordinates, and the like.
In specific practice, whichever pose representation of the target is adopted, it falls within the protection scope of the present invention as long as the inventive concept of the present invention can be realized.
An example of calculating other pose information of the target from the coordinates of the three grounding points is given below. It should be noted that the following calculation formula is only an example, and it is within the scope of the present invention to adopt other formulas or the following formula variants to realize the calculation of the target pose information.
Taking the grounding points to be the points P1(x1, y1), P2(x2, y2) and P3(x3, y3) in fig. 2 as an example, the pose information can be calculated by the following formulas:

target width w:

$$w = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$$

target length l:

$$l = \sqrt{(x_2 - x_3)^2 + (y_2 - y_3)^2}$$

geometric center point coordinates of the target:

$$\left( \frac{x_1 + x_3}{2},\ \frac{y_1 + y_3}{2} \right)$$

target course angle θ:

$$\theta = \arctan\!\left(\frac{y_3 - y_2}{x_3 - x_2}\right) \bmod 2\pi$$
Taking the target as a vehicle as an example, θ is defined as the included angle between the heading direction of the target vehicle and the positive X-axis direction of the vehicle coordinate system, with value range [0, 2π].
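These example formulas can be sketched in Python as follows (assuming, as in the labelling of fig. 2, that P1-P2 is the width edge and P2-P3 the length edge; other corner labellings require the obvious permutation):

```python
import numpy as np

def pose_from_grounding_points(p1, p2, p3):
    """Width, length, geometric centre and course angle from three
    grounding points of the 3D frame (vehicle-frame coordinates)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    w = np.linalg.norm(p1 - p2)           # target width  (edge P1-P2)
    l = np.linalg.norm(p2 - p3)           # target length (edge P2-P3)
    center = (p1 + p3) / 2.0              # midpoint of the diagonal P1-P3
    theta = np.arctan2(p3[1] - p2[1], p3[0] - p2[0]) % (2 * np.pi)  # in [0, 2*pi)
    return w, l, center, theta
```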
(5) Calculating lane width K according to the coordinates of the lane line in a vehicle coordinate system, wherein K is more than 0;
Suppose L_j denotes the lane formed by lane line j and lane line j+1; the width of lane L_j is

$$K = |x_{j+1} - x_j|$$

where x_{j+1} and x_j are the abscissas of lane line j+1 and lane line j respectively, and j ≥ 0.
(6) And calculating the first probability according to the pose and the lane width of the first target.
Let the first target be T(i,j,k) (i, j, k = 0, 1, 2, …), where i denotes a camera number, j denotes a lane line number, and k denotes a target ID. Referring to fig. 4, the first probability P(i,j,k) that the first target belongs to lane L_j is calculated as a function of Δx and K (the closed-form expression is given as an equation image in the original publication), where Δx represents the horizontal distance between the geometric center point of the target and the nearest lane line, and K represents the width of lane L_j.
2. Calculating a second probability that the second target belongs to the same lane, comprising:
If multi-camera multi-target matching is carried out, the second probability is calculated in the same way as the first probability.
If multi-target tracking in the time domain is carried out, calculating a second probability that a second target belongs to the same lane, wherein the second probability comprises the following steps:
(1) and estimating the pose of the second target at the current moment according to the tracking result of the second target at the previous moment.
The tracking result of the second target at the previous moment (including the target position, size and course angle) is input into a Kalman filter to obtain the linear velocities [vx, vy] of the second target in the X and Y directions of the vehicle coordinate system, and at the same time the time difference d_t = t2 − t1 between the previous time t1 and the current time t2 is computed.

Assuming the geometric center point coordinates of the second target at time t1 are (x, y), then according to the vehicle motion model

$$x' = x + v_x d_t, \qquad y' = y + v_y d_t$$

the predicted coordinates (x', y') of the geometric center point of the second target at time t2 can be calculated; similarly, the predicted coordinates of the other points of the second target at time t2 can be calculated, so as to obtain the predicted pose of the second target at time t2 (see fig. 5).
(2) And calculating a second probability that the second target belongs to the same lane according to the estimated pose.
Calculating the horizontal distance between the geometric center point and the closest lane line according to the predicted coordinate of the geometric center point of the second target; the lane width can directly take the calculation result of the previous time, so that a second probability that the second target belongs to the same lane can be calculated by referring to the calculation formula of the first probability.
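A compact sketch of this prediction step follows; the velocities are assumed to come from the Kalman filter described above, and the point/velocity layout is merely illustrative:

```python
def predict_points(points, velocity, t1, t2):
    """Propagate 2D points from time t1 to t2 with the constant-velocity
    model x' = x + vx*dt, y' = y + vy*dt.

    points   : iterable of (x, y) tuples in the vehicle frame
    velocity : [vx, vy], the Kalman-filter velocity estimate
    """
    vx, vy = velocity
    dt = t2 - t1
    return [(x + vx * dt, y + vy * dt) for (x, y) in points]
```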
3. And calculating a constraint score of the first target and the second target belonging to the same lane according to the first probability and the second probability.
Whether multi-target tracking in the time domain or multi-camera multi-target matching is carried out, on the premise that the first probability P(i,j,k) and the second probability P(i',j',k') are known, the constraint score that the first target T(i,j,k) and the second target T(i',j',k') belong to the same lane is calculated as follows.

Referring to fig. 4, define: when Δx < w/2, the first target T(i,j,k) is close to lane line j; when Δx > K − w/2, the first target T(i,j,k) is close to lane line j+1, with probability 1 − P(i,j,k); when w/2 ≤ Δx ≤ K − w/2, the first target T(i,j,k) belongs to lane L_j with probability P(i,j,k).

Likewise, when Δx < w/2, the second target T(i',j',k') is close to lane line j; when Δx > K − w/2, the second target T(i',j',k') is close to lane line j+1, with probability 1 − P(i',j',k'); when w/2 ≤ Δx ≤ K − w/2, the second target T(i',j',k') belongs to lane L_j with probability P(i',j',k').

The constraint score S_L that the first target T(i,j,k) and the second target T(i',j',k') to be matched belong to the same lane is then computed from P(i,j,k) and P(i',j',k') (the closed-form expression is given as an equation image in the original publication).
Second, calculating the intersection ratio score of the first target and the second target
1. The corner coordinates of the first object and the corner coordinates of the second object in the top view at the current moment are determined.
Whether multi-target tracking in the time domain or multi-camera multi-target matching is carried out, the corner point coordinates of the first target in the top view at the current moment are calculated according to the pose of the first target at the current moment.
Referring to fig. 3, the coordinates of the points P1, P2 and P3 are known, and the coordinates of the point P4 are restored from the coordinates of the points P1, P2 and P3.
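Assuming P1, P2 and P3 are consecutive corners of the rectangular footprint, the missing corner follows from the parallelogram rule, as sketched below:

```python
import numpy as np

def fourth_corner(p1, p2, p3):
    """Restore the fourth corner of a rectangle from three consecutive
    corners: P4 = P1 + P3 - P2 (P2 lies opposite the missing corner)."""
    return np.asarray(p1) + np.asarray(p3) - np.asarray(p2)
```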
If multi-target tracking in the time domain is carried out, estimating the corner point coordinates of the second target under the top view at the current moment according to the pose of the second target at the previous moment:
Assuming that, in the vehicle coordinate system, a corner point of the second target at the previous time t1 has coordinates (x, y), then according to the vehicle motion model

$$x' = x + v_x d_t, \qquad y' = y + v_y d_t$$

the corner point coordinates (x', y') of the second target at time t2 can be obtained, where [vx, vy] are the linear velocities of the second target in the X and Y directions, estimated by inputting the tracking result of the second target at the previous moment into a Kalman filter.
If multi-camera multi-target matching is carried out, the determination method of the corner point coordinates of the first target and the determination method of the corner point coordinates of the second target under the top view at the current moment are the same.
2. Calculating the area of a triangle formed by each corner point of the first target and the origin of the first vehicle coordinate system, and determining the triangle corresponding to the maximum area as a first maximum triangle;
and calculating the area of the triangle formed by each corner point of the second target and the origin of the second vehicle coordinate system, and determining the triangle corresponding to the maximum area as a second maximum triangle.
If multi-target tracking in the time domain is carried out, determining the geometric center point of the current vehicle at the current moment as the origin of a first vehicle coordinate system; determining the geometric center point of the current vehicle at the current moment estimated according to the geometric center point coordinates of the current vehicle at the previous moment and the vehicle motion model as the origin of a second vehicle coordinate system:
Assume the current time is t2 and the previous time is t1, and obtain the linear velocity v and the angular velocity w of the current vehicle at time t2. Assume the geometric center point coordinates of the current vehicle at time t1 are (x, y); according to the vehicle motion model (the closed-form expression is given as an equation image in the original publication), the displacement [dx, dy] of the geometric center point (x, y) at time t2 relative to time t1 is calculated, and the coordinate point corresponding to [dx, dy] is the origin of the second vehicle coordinate system.
Referring to FIG. 5, the geometric center point of the current vehicle at time t1 is O and the corner points of the second target are P1, P2, P3 and P4; the estimated geometric center point of the current vehicle at time t2 is O' and the corresponding estimated corner points of the second target are P1', P2', P3' and P4'.
Since the coordinates of the corner points P1', P2', P3' and P4' of the second target are known, the area of each triangle formed by a pair of these corner points and the origin O' of the second vehicle coordinate system can also be determined. The triangle ΔO'P1'P3' formed by the corner points P1' and P3' and the origin O' has the largest area (see fig. 5), so ΔO'P1'P3' is the second maximum triangle.
3. And taking the intersection ratio of the first maximum triangle and the second maximum triangle as the intersection ratio score of the first target and the second target.
Taking multi-camera multi-target matching as an example, referring to fig. 6, the origin of the first vehicle coordinate system is the same as the origin of the second vehicle coordinate system, and both are the geometric center point O of the current vehicle at the current time.
Suppose the triangle ΔOP1P3 formed by the corner points P1 and P3 of the first target and the origin O of the vehicle coordinate system has the largest area, and the triangle ΔOP1'P2' formed by the corner points P1' and P2' of the second target and the origin O has the largest area; the intersection ratio score S_O is then

$$S_O = \frac{\operatorname{Area}\left(\triangle OP_1P_3 \cap \triangle OP_1'P_2'\right)}{\operatorname{Area}\left(\triangle OP_1P_3 \cup \triangle OP_1'P_2'\right)}$$
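For ease of understanding, the maximum-triangle selection and the intersection ratio can be sketched as follows; the sketch assumes the shapely library for the polygon intersection and union, which this embodiment does not itself prescribe:

```python
import numpy as np
from shapely.geometry import Polygon  # assumed available for polygon set operations

def max_triangle(corners, origin=(0.0, 0.0)):
    """Return the largest-area triangle formed by the origin and a pair
    of the target's corner points, as a shapely Polygon."""
    o = np.asarray(origin, dtype=float)
    best, best_area = None, -1.0
    for i in range(len(corners)):
        for j in range(i + 1, len(corners)):
            a = np.asarray(corners[i], dtype=float)
            b = np.asarray(corners[j], dtype=float)
            # triangle area = |2D cross product of the two edge vectors| / 2
            area = 0.5 * abs((a[0] - o[0]) * (b[1] - o[1])
                             - (a[1] - o[1]) * (b[0] - o[0]))
            if area > best_area:
                best, best_area = Polygon([tuple(o), tuple(a), tuple(b)]), area
    return best

def intersection_ratio(tri1, tri2):
    """Intersection-over-union of the two maximum triangles (S_O)."""
    union = tri1.union(tri2).area
    return tri1.intersection(tri2).area / union if union > 0 else 0.0
```

Any convex-polygon clipping routine could replace shapely here; only the intersection and union areas are needed.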
In step S13, the similarity score of the first target and the second target is determined according to the constraint score and the intersection ratio score, specifically:
According to a preset weight δ, the constraint score S_L and the intersection ratio score S_O are weighted and summed to obtain the similarity score of the first target and the second target:

$$S = \delta \cdot S_L + (1 - \delta) \cdot S_O$$

where δ represents the preset weight, with value range [0, 1].
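The final combination is then a one-line weighted sum; δ = 0.5 below is only an illustrative default, since the weight is left preset in this embodiment:

```python
def similarity_score(s_lane, s_iou, delta=0.5):
    """Similarity score S = delta * S_L + (1 - delta) * S_O, delta in [0, 1]."""
    assert 0.0 <= delta <= 1.0, "delta must lie in [0, 1]"
    return delta * s_lane + (1.0 - delta) * s_iou
```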
It can be understood that, according to the technical scheme provided by this embodiment, the constraint score and the intersection ratio score of the first target and the second target that belong to the same lane are calculated respectively, and the similarity score of the first target and the second target is determined.
Moreover, due to the technical scheme provided by the embodiment, consideration of the target appearance information is abandoned, so that the problem of matching failure caused by low matching registration rate or low appearance information distinction degree due to instability of the target appearance information under the influence of a plurality of external factors in the prior art is effectively solved.
Further, when a target appears incompletely in the camera view, the preset detection model can still mark an approximate 3D frame on the camera image for subsequent similarity calculation, which solves the problems of missed and mismatched targets caused by targets appearing incompletely in the camera view in the prior art, improves the target matching accuracy, and provides good user experience and high satisfaction.
Example two
According to an exemplary embodiment, a multi-target tracking method based on a vehicle-mounted camera is shown, and the method comprises the following steps: the method for calculating the target similarity score based on the vehicle-mounted camera in the first embodiment.
Fig. 7 is a flowchart illustrating a multi-target tracking method based on a vehicle-mounted camera according to an exemplary embodiment, as shown in fig. 7, the method including:
step S21, calibrating the vehicle-mounted camera to obtain a projection matrix of the camera;
step S22, detecting a 3D frame and a lane line of a first target from a camera image according to a preset detection model; mapping the grounding point and the lane line of the 3D frame to a vehicle coordinate system through the projection matrix;
step S23, determining the targets in the different camera images at the adjacent time as a first target and a second target respectively;
step S24, calculating a first probability that the first target belongs to any lane and a first maximum triangle formed by an angular point of the first target and an origin of a first vehicle coordinate system according to the pose and lane line coordinates of the first target;
step S25, estimating the poses of the current vehicle and the second target at the current moment according to the tracking result of the second target at the previous moment and the vehicle motion model, and calculating a second maximum triangle formed by the second probability, the corner point of the second target and the origin of a second vehicle coordinate system according to the estimated poses;
step S26, calculating a constraint score of the first target and the second target belonging to the same lane according to the first probability and the second probability; taking the intersection ratio of the first maximum triangle and the second maximum triangle as the intersection ratio score of the first target and the second target;
and step S27, carrying out weighted summation on the constraint score and the intersection ratio score according to preset weight to obtain a similarity score of the first target and the second target.
It should be noted that application scenarios to which the technical solution provided by this embodiment is applicable include, but are not limited to: automatic driving, assisted driving, etc. of the vehicle. The technical scheme provided by the embodiment can be loaded in a central control system of the current vehicle for use and can also be loaded in electronic equipment for use when in actual use; the electronic devices include, but are not limited to: vehicle-mounted computer and external computer equipment.
According to the technical scheme provided by the embodiment, the number of the vehicle-mounted cameras is at least one. The implementation manner of each step in this embodiment can refer to the related description in the first embodiment, and details are not described in this embodiment.
In a specific practice, the technical solution provided in this embodiment may further include:
taking the first target as a reference target and the second target as a target to be matched;
calculating a similarity score for the first target and all second targets;
judging whether the similarity score is larger than a threshold (the threshold is set according to user requirements, experimental data or historical experience values); if so, storing the similarity score into a score result list; otherwise, judging that the first target does not match the current second target and discarding the current similarity score; and
sorting the similarity scores in the score result list, and taking the second target corresponding to the highest similarity score as the tracking result.
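This matching loop can be sketched in Python as follows, where `score_fn` stands for the full similarity computation of embodiment one and the structure of the target objects is illustrative:

```python
def best_match(first_target, second_targets, score_fn, threshold):
    """Score the reference target against every candidate, keep only
    scores above `threshold`, and return the best-scoring candidate
    (None if the first target is left unmatched)."""
    scored = [(score_fn(first_target, t), t) for t in second_targets]
    scored = [(s, t) for (s, t) in scored if s > threshold]  # score result list
    if not scored:
        return None                       # first target left unmatched
    scored.sort(key=lambda st: st[0], reverse=True)
    return scored[0][1]                   # candidate with the highest similarity
```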
It can be understood that, according to the technical scheme provided by this embodiment, the constraint score and the intersection ratio score of the first target and the second target that belong to the same lane are calculated respectively, and the similarity score of the first target and the second target is determined.
Moreover, due to the technical scheme provided by the embodiment, consideration of the target appearance information is abandoned, so that the problem of matching failure caused by low matching registration rate or low appearance information distinction degree due to instability of the target appearance information under the influence of a plurality of external factors in the prior art is effectively solved.
Further, when a target appears incompletely in the camera view, the preset detection model can still mark an approximate 3D frame on the camera image for subsequent similarity calculation, which solves the problems of missed and mismatched targets caused by targets appearing incompletely in the camera view in the prior art, improves the target matching accuracy, and provides good user experience and high satisfaction.
EXAMPLE III
According to an exemplary embodiment, a multi-target matching method for a vehicle-mounted camera is shown, which includes: the method for calculating the target similarity score based on the vehicle-mounted camera in the first embodiment.
Fig. 8 is a flowchart illustrating a vehicle-mounted camera-based multi-target matching method according to an exemplary embodiment, as shown in fig. 8, the method including:
step S31, calibrating the vehicle-mounted camera to obtain a projection matrix of the camera;
step S32, calculating the overlapping area of the visual fields of two adjacent vehicle-mounted cameras according to the position and the posture of each vehicle-mounted camera, and marking a boundary line on the camera image;
step S33, detecting a 3D frame and a lane line of a first target from a camera image according to a preset detection model; mapping the grounding point and the lane line of the 3D frame to a vehicle coordinate system through the projection matrix;
step S34, determining the targets appearing in the overlapped field of view of the camera at the same time as a first target and a second target respectively;
step S35, calculating a first probability that the first target belongs to any lane and a first maximum triangle formed by an angular point of the first target and an origin of a first vehicle coordinate system according to the pose and lane line coordinates of the first target;
step S36, calculating a second probability that the second target belongs to any lane and a second maximum triangle formed by the corner point of the second target and the origin of a second vehicle coordinate system according to the pose and the lane line coordinates of the second target;
step S37, calculating a constraint score of the first target and the second target belonging to the same lane according to the first probability and the second probability; taking the intersection ratio of the first maximum triangle and the second maximum triangle as the intersection ratio score of the first target and the second target;
and step S38, carrying out weighted summation on the constraint score and the intersection ratio score according to preset weight to obtain a similarity score of the first target and the second target.
It should be noted that application scenarios to which the technical solution provided by this embodiment is applicable include, but are not limited to: automatic driving, assisted driving, etc. of the vehicle. The technical scheme provided by the embodiment can be loaded in a central control system of the current vehicle for use and can also be loaded in electronic equipment for use when in actual use; the electronic devices include, but are not limited to: vehicle-mounted computer and external computer equipment.
It should be noted that, according to the technical solution provided by this embodiment, the number of the vehicle-mounted cameras is at least two. In step S32, calculating the overlapping area of the fields of view of two adjacent vehicle-mounted cameras according to the position and the posture of each vehicle-mounted camera belongs to the prior art, and this embodiment is not described again.
In step S34, the "target appearing in the camera overlapping view at the current time" is determined, which is generally calculated and determined according to the pose of each vehicle-mounted camera and the pose of the target, and belongs to the prior art, and this embodiment is not described again. The implementation manner of each other step in this embodiment may refer to the related description in the first embodiment, and is not described again in this embodiment.
In a specific practice, the technical solution provided in this embodiment may further include:
taking the first target as a reference target and the second target as a target to be matched;
calculating a similarity score for the first target and all second targets;
judging whether the similarity score is larger than a threshold (the threshold is set according to user requirements, experimental data or historical experience values); if so, storing the similarity score into a score result list; otherwise, judging that the first target does not match the current second target and discarding the current similarity score; and
sorting the similarity scores in the score result list, and taking the second target corresponding to the highest similarity score as the matching result.
It can be understood that, according to the technical scheme provided by this embodiment, the constraint score and the intersection ratio score of the first target and the second target that belong to the same lane are calculated respectively, and the similarity score of the first target and the second target is determined.
Moreover, due to the technical scheme provided by the embodiment, consideration of the target appearance information is abandoned, so that the problem of matching failure caused by low matching registration rate or low appearance information distinction degree due to instability of the target appearance information under the influence of a plurality of external factors in the prior art is effectively solved.
Further, when a target appears incompletely in the camera view, the preset detection model can still mark an approximate 3D frame on the camera image for subsequent similarity calculation, which solves the problems of missed and mismatched targets caused by targets appearing incompletely in the camera view in the prior art, improves the target matching accuracy, and provides good user experience and high satisfaction.
Example four
Fig. 9 is a schematic block diagram illustrating an on-board camera based object similarity score calculation system 100 according to an exemplary embodiment, the system 100 including, as shown in fig. 9:
a determining module 101, configured to determine a first target and a second target to be matched, where the first target and the second target exist in different camera images;
a calculating module 102, configured to respectively calculate the constraint score that the first target and the second target belong to the same lane, and the intersection ratio score of the first target and the second target;
the determining module 101 is further configured to determine a similarity score between the first target and the second target according to the constraint score and the intersection ratio score.
It should be noted that application scenarios to which the technical solution provided by this embodiment is applicable include, but are not limited to: automatic driving, assisted driving, etc. of the vehicle. The technical scheme provided by the embodiment can be loaded in a central control system of the current vehicle for use and can also be loaded in electronic equipment for use when in actual use; the electronic devices include, but are not limited to: vehicle-mounted computer and external computer equipment.
It should be noted that, as the implementation manner of each module in this embodiment can refer to the related description in the first embodiment, this embodiment is not described again.
It can be understood that, according to the technical scheme provided by this embodiment, the constraint score and the intersection ratio score of the first target and the second target that belong to the same lane are calculated respectively, and the similarity score of the first target and the second target is determined.
Moreover, due to the technical scheme provided by the embodiment, consideration of the target appearance information is abandoned, so that the problem of matching failure caused by low matching registration rate or low appearance information distinction degree due to instability of the target appearance information under the influence of a plurality of external factors in the prior art is effectively solved.
EXAMPLE five
An electronic device is shown according to an example embodiment, comprising:
the system comprises a communication module, a processor and a memory, wherein the memory stores program instructions;
the processor is configured to execute program instructions stored in the memory to perform the method of embodiment one; and/or performing the method of embodiment two; and/or performing the method of embodiment three.
It should be noted that the electronic devices include, but are not limited to: vehicle-mounted computer and external computer equipment. The communication module includes but is not limited to: wired communication modules and wireless communication modules, for example: WCDMA, GSM, CDMA and/or LTE communication modules, ZigBee modules, Bluetooth modules, Wi-Fi modules and the like.
Processors include, but are not limited to: CPUs, microcontrollers (MCUs), PLC controllers, FPGA controllers, and the like.
The memory may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory; other removable/non-removable, volatile/nonvolatile computer system storage media may also be included. The memory may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
It can be understood that, according to the technical scheme provided by this embodiment, the constraint score and the intersection ratio score of the first target and the second target that belong to the same lane are calculated respectively, and the similarity score of the first target and the second target is determined.
Moreover, due to the technical scheme provided by the embodiment, consideration of the target appearance information is abandoned, so that the problem of matching failure caused by low matching registration rate or low appearance information distinction degree due to instability of the target appearance information under the influence of a plurality of external factors in the prior art is effectively solved.
EXAMPLE six
A computer-readable storage medium having stored thereon a rewritable computer program according to an exemplary embodiment is shown;
when the computer program is run on a computer device, causing the computer device to perform the method according to embodiment one; and/or performing the method of embodiment two; and/or performing the method of embodiment three.
The computer-readable storage medium disclosed by the embodiment includes but is not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
It can be understood that, in the technical solution provided by this embodiment, the constraint score that the first target and the second target belong to the same lane and the intersection-over-union (IoU) score of the first target and the second target are calculated respectively, and the similarity score of the first target and the second target is determined from these two scores.
Moreover, because the technical solution provided by this embodiment dispenses with target appearance information altogether, it effectively solves the prior-art problem of matching failures caused by a low match rate or poorly discriminative appearance features when the target appearance information is unstable under the influence of various external factors.
It should be understood that the same or similar parts of the above embodiments may be referred to one another, and content not described in detail in one embodiment may be found in the same or similar description in other embodiments.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, in the description of the present invention, "a plurality" means at least two, unless otherwise specified.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process. Alternative implementations, in which functions may be executed out of the order shown or discussed (including substantially concurrently or in the reverse order, depending on the functionality involved), are also within the scope of the preferred embodiments of the present invention, as would be understood by those skilled in the art.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented as a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (16)

1. A target similarity score calculation method based on a vehicle-mounted camera is characterized by comprising the following steps:
determining a first target and a second target to be matched, wherein the first target and the second target exist in different camera images;
respectively calculating a constraint score that the first target and the second target belong to the same lane, and an intersection-over-union (IoU) score of the first target and the second target;
and determining a similarity score of the first target and the second target according to the constraint score and the IoU score.
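For illustration only, the data flow recited in claim 1 may be sketched as follows; this is a minimal Python sketch, every name is an assumption, and the combination step is left pluggable because claim 1 does not fix it (claim 9 below narrows it to a weighted summation):

    # Illustrative sketch only; not the patented implementation.
    def target_similarity(first, second, lane_score_fn, iou_score_fn, combine_fn):
        """first/second: detections of one physical target in different camera images."""
        s_lane = lane_score_fn(first, second)   # same-lane constraint score (cf. claim 3)
        s_iou = iou_score_fn(first, second)     # IoU score (cf. claim 6)
        return combine_fn(s_lane, s_iou)        # e.g. a weighted sum (cf. claim 9)

    # usage with a weighted-sum combiner:
    # target_similarity(a, b, lane_fn, iou_fn, lambda l, i: 0.6 * l + 0.4 * i)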
2. The method of claim 1, wherein determining the first and second targets to be matched comprises:
if multi-target tracking in the time domain is carried out, determining targets in different camera images at adjacent moments as the first target and the second target; and/or,
and if multi-camera multi-target matching is carried out, determining targets appearing in the overlapping fields of view of the cameras at the same moment as the first target and the second target.
3. The method of claim 1, wherein said calculating a constraint score for said first and second targets belonging to a same lane comprises:
calculating a first probability that the first target belongs to any lane;
calculating a second probability that the second target belongs to the same lane;
and calculating a constraint score of the first target and the second target belonging to the same lane according to the first probability and the second probability.
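A sketch of one plausible instantiation of claim 3: if each probability is a per-lane membership distribution, the constraint score can be taken as the probability that both targets occupy the same lane, i.e. the sum over lanes of the product of the two memberships; the claim itself does not fix this formula:

    def lane_constraint_score(p_first, p_second):
        """p_first, p_second: per-lane membership probabilities, same lane ordering."""
        return sum(p1 * p2 for p1, p2 in zip(p_first, p_second))

    # e.g. lane_constraint_score([0.8, 0.2], [0.7, 0.3]) -> 0.62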
4. The method of claim 3, wherein said calculating a first probability that the first target belongs to any lane comprises:
detecting a 3D frame and a lane line of a first target from a camera image according to a preset detection model;
mapping the grounding points of the 3D frame and the lane line from the image coordinate system to the vehicle coordinate system to obtain the pose of the first target in the vehicle coordinate system, wherein the pose is the coordinates of at least three grounding points of the 3D frame;
and calculating the first probability according to the pose of the first target and the lane line coordinates.
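The last step of claim 4 can be sketched under an assumed probability model, namely that the lane probability is the fraction of the 3D frame's grounding points falling between the lane's left and right boundary polylines, all in the vehicle coordinate system; the fraction-based model and the polyline representation are assumptions, as the claim only requires that the probability be computed from the pose and the lane line coordinates:

    import numpy as np

    def lane_membership_prob(ground_pts, left_line, right_line):
        """Fraction of grounding points between the two lane boundary polylines.
        ground_pts: (N, 2) (x, y) in vehicle coordinates; polylines sorted by y."""
        pts = np.asarray(ground_pts, dtype=float)
        left = np.asarray(left_line, dtype=float)
        right = np.asarray(right_line, dtype=float)
        inside = 0
        for x, y in pts:
            x_l = np.interp(y, left[:, 1], left[:, 0])    # boundary x at this y
            x_r = np.interp(y, right[:, 1], right[:, 0])
            if min(x_l, x_r) <= x <= max(x_l, x_r):
                inside += 1
        return inside / len(pts)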
5. The method of claim 4, wherein calculating a second probability that a second target belongs to the same lane comprises:
if multi-target tracking in the time domain is carried out, estimating the pose of the second target at the current moment according to the tracking result of the second target at the previous moment, and calculating the second probability according to the estimated pose; and/or,
and if multi-camera multi-target matching is carried out, calculating the second probability by the same method as the first probability.
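For the time-domain branch of claim 5, a sketch under an assumed constant-velocity motion model: the tracked pose at the previous moment is extrapolated to the current moment, and the lane probability is then evaluated on the predicted grounding points:

    def predict_ground_pts(prev_pts, velocity, dt):
        """Extrapolate (x, y) grounding points from t-1 to t; velocity is (vx, vy)."""
        vx, vy = velocity
        return [(x + vx * dt, y + vy * dt) for x, y in prev_pts]

    # the predicted points can then be passed to a lane-probability function
    # such as lane_membership_prob(...) sketched above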
6. The method of claim 1, wherein said calculating an IoU score of the first target and the second target comprises:
respectively determining the corner coordinates of the first target and the corner coordinates of the second target in the top view at the current moment;
calculating the areas of the triangles formed by the corner points of the first target and the origin of the first vehicle coordinate system, and determining the triangle with the maximum area as a first maximum triangle;
calculating the areas of the triangles formed by the corner points of the second target and the origin of the second vehicle coordinate system, and determining the triangle with the maximum area as a second maximum triangle;
and taking the intersection-over-union of the first maximum triangle and the second maximum triangle as the IoU score of the first target and the second target.
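Claim 6 can be sketched under one plausible reading, namely that each triangle is formed by a pair of adjacent top-view corners together with the coordinate origin; this adjacency reading is an assumption, as the claim only says the triangles are formed by the corner points and the origin. The polygon intersection is delegated to the shapely library:

    from shapely.geometry import Polygon

    def max_triangle(corners, origin=(0.0, 0.0)):
        """Largest-area triangle over adjacent corner pairs and the origin."""
        tris = [Polygon([corners[i], corners[(i + 1) % len(corners)], origin])
                for i in range(len(corners))]
        return max(tris, key=lambda t: t.area)

    def max_triangle_iou(corners_a, corners_b):
        """IoU of the two maximum triangles, used as the IoU score of the targets."""
        ta, tb = max_triangle(corners_a), max_triangle(corners_b)
        union = ta.union(tb).area
        return ta.intersection(tb).area / union if union > 0 else 0.0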
7. The method of claim 6, wherein respectively determining the corner coordinates of the first target and the second target in the top view at the current moment comprises:
if multi-target tracking in the time domain is carried out, calculating the corner point coordinates of the first target in the top view at the current moment according to the pose of the first target at the current moment, and estimating the corner point coordinates of the second target in the top view at the current moment according to the pose of the second target at the previous moment and a vehicle motion model; and/or,
if multi-camera multi-target matching is carried out, calculating the corner point coordinates of the first target under the top view at the current moment according to the pose of the first target at the current moment; and calculating the corner point coordinates of the second target under the top view at the current moment according to the pose of the second target at the current moment.
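In the time-domain branch of claim 7, expressing the second target's previous-frame corners in the current top view also requires compensating the ego vehicle's own motion; a sketch under an assumed planar constant-turn model, with speed and yaw rate taken from odometry, follows:

    import math

    def corners_in_current_frame(corners_prev, ego_speed, ego_yaw_rate, dt):
        """Map top-view corners from the previous vehicle frame to the current one,
        assuming planar ego motion with constant speed and yaw rate over dt."""
        dtheta = ego_yaw_rate * dt                    # ego heading change
        dx = ego_speed * dt * math.cos(dtheta / 2.0)  # midpoint-rule displacement
        dy = ego_speed * dt * math.sin(dtheta / 2.0)
        c, s = math.cos(-dtheta), math.sin(-dtheta)
        return [(c * (x - dx) - s * (y - dy),
                 s * (x - dx) + c * (y - dy)) for x, y in corners_prev]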
8. The method of claim 7, further comprising:
if multi-target tracking in the time domain is carried out, determining the geometric center point of the current vehicle at the current moment as the origin of the first vehicle coordinate system, and determining the geometric center point of the current vehicle at the current moment, estimated from the geometric center point coordinates of the current vehicle at the previous moment and the vehicle motion model, as the origin of the second vehicle coordinate system; and/or,
if multi-camera multi-target matching is carried out, the origin of the first vehicle coordinate system is the same as the origin of the second vehicle coordinate system, both being the geometric center point of the current vehicle at the current moment.
9. The method of any one of claims 1 to 8, wherein determining the similarity score of the first target and the second target according to the constraint score and the IoU score comprises:
and carrying out, according to preset weights, a weighted summation of the constraint score and the IoU score to obtain the similarity score of the first target and the second target.
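In formula form, with preset weights $w_1$ and $w_2$, the weighted summation of claim 9 reads $S = w_1 \cdot S_{lane} + w_2 \cdot S_{IoU}$; choosing the weights so that $w_1 + w_2 = 1$ keeps $S$ in $[0, 1]$ when both component scores do, though the claim does not require this normalization.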
10. A multi-target tracking method based on a vehicle-mounted camera is characterized by comprising the following steps:
the vehicle-mounted camera-based target similarity score calculation method of any one of claims 1 to 9.
11. The method of claim 10, further comprising:
taking the first target as a reference target and the second target as a target to be matched;
calculating similarity scores between the first target and all second targets;
judging whether each similarity score is larger than a threshold value; if so, storing the similarity score into a score result list; otherwise, judging that the first target does not match the current second target, and discarding the current similarity score;
and sorting the similarity scores in the score result list, and taking the second target corresponding to the highest similarity score as the tracking result.
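A sketch of the score-and-select loop of claim 11 follows; the matching method of claim 13 below is identical except that the highest-scoring second target is reported as the matching result rather than the tracking result. The threshold value is an assumption:

    def best_match(first_target, second_targets, score_fn, threshold=0.5):
        """Second target with the highest above-threshold similarity, or None."""
        scored = [(score_fn(first_target, t), t) for t in second_targets]
        result_list = [(s, t) for s, t in scored if s > threshold]  # keep strong scores
        if not result_list:
            return None  # the first target is unmatched
        result_list.sort(key=lambda st: st[0], reverse=True)
        return result_list[0][1]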
12. A multi-target matching method based on a vehicle-mounted camera is characterized by comprising the following steps:
the vehicle-mounted camera-based target similarity score calculation method of any one of claims 1 to 9.
13. The method of claim 12, further comprising:
taking the first target as a reference target and the second target as a target to be matched;
calculating similarity scores between the first target and all second targets;
judging whether each similarity score is larger than a threshold value; if so, storing the similarity score into a score result list; otherwise, judging that the first target does not match the current second target, and discarding the current similarity score;
and sorting the similarity scores in the score result list, and taking the second target corresponding to the highest similarity score as the matching result.
14. A vehicle-mounted camera-based target similarity score calculation system, comprising:
a determining module for determining a first target and a second target to be matched, the first target and the second target being present in different camera images;
a calculation module for calculating the constraint score that the first target and the second target belong to the same lane, and the IoU score of the first target and the second target;
wherein the determining module is further configured to determine the similarity score of the first target and the second target according to the constraint score and the IoU score.
15. An electronic device, comprising:
the system comprises a communication module, a processor and a memory, wherein the memory stores program instructions;
the processor is configured to execute program instructions stored in the memory to perform the method of any of claims 1-9; and/or performing the method of claim 10 or 11; and/or to perform the method of claim 12 or 13.
16. A computer-readable storage medium having stored thereon an erasable computer program;
when the computer program is run on a computer device, causing the computer device to perform the method of any one of claims 1 to 9; and/or performing the method of claim 10 or 11; and/or to perform the method of claim 12 or 13.
CN202111042603.4A 2021-09-07 2021-09-07 Target similarity score calculation method and system based on vehicle-mounted camera Active CN113792634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111042603.4A CN113792634B (en) 2021-09-07 2021-09-07 Target similarity score calculation method and system based on vehicle-mounted camera

Publications (2)

Publication Number Publication Date
CN113792634A (en) 2021-12-14
CN113792634B CN113792634B (en) 2022-04-15

Family

ID=78879664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111042603.4A Active CN113792634B (en) 2021-09-07 2021-09-07 Target similarity score calculation method and system based on vehicle-mounted camera

Country Status (1)

Country Link
CN (1) CN113792634B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080273752A1 (en) * 2007-01-18 2008-11-06 Siemens Corporate Research, Inc. System and method for vehicle detection and tracking
US20150063628A1 (en) * 2013-09-04 2015-03-05 Xerox Corporation Robust and computationally efficient video-based object tracking in regularized motion environments
CN106056100A (en) * 2016-06-28 2016-10-26 重庆邮电大学 Vehicle auxiliary positioning method based on lane detection and object tracking
US20170178345A1 (en) * 2015-12-17 2017-06-22 Canon Kabushiki Kaisha Method, system and apparatus for matching moving targets between camera views
CN110459064A (en) * 2019-09-19 2019-11-15 上海眼控科技股份有限公司 Vehicle illegal behavioral value method, apparatus, computer equipment
CN110515073A (en) * 2019-08-19 2019-11-29 南京慧尔视智能科技有限公司 The trans-regional networking multiple target tracking recognition methods of more radars and device
CN110542436A (en) * 2019-09-11 2019-12-06 百度在线网络技术(北京)有限公司 Evaluation method, device and equipment of vehicle positioning system and storage medium
CN110745140A (en) * 2019-10-28 2020-02-04 清华大学 Vehicle lane change early warning method based on continuous image constraint pose estimation
CN111145545A (en) * 2019-12-25 2020-05-12 西安交通大学 Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
CN111652912A (en) * 2020-06-10 2020-09-11 北京嘀嘀无限科技发展有限公司 Vehicle counting method and system, data processing equipment and intelligent shooting equipment
CN111862147A (en) * 2020-06-03 2020-10-30 江西江铃集团新能源汽车有限公司 Method for tracking multiple vehicles and multiple human targets in video
CN112733270A (en) * 2021-01-08 2021-04-30 浙江大学 System and method for predicting vehicle running track and evaluating risk degree of track deviation
CN112927303A (en) * 2021-02-22 2021-06-08 中国重汽集团济南动力有限公司 Lane line-based automatic driving vehicle-mounted camera pose estimation method and system
CN112990124A (en) * 2021-04-26 2021-06-18 湖北亿咖通科技有限公司 Vehicle tracking method and device, electronic equipment and storage medium
CN113077511A (en) * 2020-01-06 2021-07-06 初速度(苏州)科技有限公司 Multi-camera target matching and tracking method and device for automobile
CN113177968A (en) * 2021-04-27 2021-07-27 北京百度网讯科技有限公司 Target tracking method and device, electronic equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WU, QINGSONG: "Algorithms for Multiple Ground Target Tracking", Electrical and Computer Engineering
LIU, Jingyu (刘靖钰): "Vehicle detection and trajectory tracking prediction based on dashcam video streams", China Master's Theses Full-text Database
YUAN et al. (袁?川等): "Intelligent vehicle lateral positioning technology based on lane line extraction", Journal of Military Transportation University

Also Published As

Publication number Publication date
CN113792634B (en) 2022-04-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant