CN107240124B - Cross-lens multi-target tracking method and device based on space-time constraint - Google Patents


Info

Publication number
CN107240124B
Authority
CN
China
Prior art keywords
tracking
camera
target
points
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710358354.7A
Other languages
Chinese (zh)
Other versions
CN107240124A (en)
Inventor
鲁继文
周杰
任亮亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201710358354.7A priority Critical patent/CN107240124B/en
Publication of CN107240124A publication Critical patent/CN107240124A/en
Priority to PCT/CN2017/115672 priority patent/WO2018209934A1/en
Application granted granted Critical
Publication of CN107240124B publication Critical patent/CN107240124B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/292 Multi-camera tracking
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Abstract

The invention discloses a cross-lens multi-target tracking method and device based on space-time constraints. The method comprises the following steps: preprocessing images in different color spaces so that the pictures become consistent in color temperature and tone, in order to acquire the camera information of a plurality of camera devices; establishing a correspondence between 2D points through the projection matrices of the camera devices, wherein each projection matrix is a projection matrix with respect to the 3D world, in order to acquire the geometric information among the plurality of camera devices; and performing human body feature matching among the multiple cameras according to the camera information and the geometric information, so as to obtain each camera picture together with a real-time tracking result by using the appearance and spatio-temporal features of the tracked target. The method combines current multi-target tracking algorithms with multi-camera processing and exploits the pose relation matrix of the camera network, thereby achieving multi-camera multi-target tracking, improving the robustness of object tracking, reducing tracking errors, and improving tracking accuracy.

Description

Cross-lens multi-target tracking method and device based on space-time constraint
Technical Field
The invention relates to the technical field of visual target tracking in computer image processing, in particular to a cross-lens multi-target tracking method and device based on space-time constraint.
Background
Video object tracking refers to the task of, given the initial position of an object in a video, outputting the position of that object at every subsequent moment. Object tracking is an important problem in computer vision and is usually the first step of a video analysis pipeline. Accordingly, a large number of researchers work on object tracking, and many effective tracking algorithms have been proposed. In some surveillance scenarios, multiple objects must be tracked simultaneously in a complex scene. Mutual occlusion between objects increases the difficulty of tracking, and this occurs frequently in pedestrian tracking: when a large group of people appears in a camera picture at the same time, the people overlap one another so that their actual positions cannot be acquired accurately. Current multi-target tracking methods fall mainly into two classes: multi-target tracking based on a single camera, and multi-target tracking based on multiple cameras.
Single-camera methods typically first generate tracklets, i.e., short trajectory segments formed from conventional per-frame detection results, and then link these tracklets, for example with the Hungarian assignment algorithm.
Multi-camera methods focus mainly on how to fuse the data of multiple cameras and mainly comprise calibration-based methods and feature-matching methods. Calibration-based methods mainly use the projection matrices of the cameras to project the different camera pictures onto one common picture. Feature-matching methods improve the matching result mainly by searching for effective appearance features and spatio-temporal information. Because of the large illumination and viewing-angle differences between lenses, multi-camera tracking is more challenging than single-camera tracking.
One effective approach to tracking multiple objects in complex scenes is to use a multi-camera monitoring system. In a monitoring area covered by several overlapping camera devices, the position of an object can be acquired more accurately by combining the information of the multiple cameras. As the price of sensors and processors has decreased, deploying multiple cameras in a scene has become increasingly common. The multi-camera real-time tracking problem has two main parts: tracking inside a single camera and tracking across cameras. The handling of overlapping coverage areas and uncovered areas in cross-camera tracking has been discussed in many articles. Multi-camera multi-target tracking is valuable for security and pedestrian data analysis, but the work is also very challenging because of the complexity of the problem. Recently, many researchers have proposed using the information of multiple cameras to improve the robustness of object tracking, but these approaches neglect geometric constraints, violate geometric assumptions, and then need more complicated methods to correct the resulting errors.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the present invention is to provide a cross-lens multi-target tracking method based on space-time constraint, which can reduce tracking error and improve tracking accuracy while improving robustness of object tracking.
The invention also aims to provide a cross-lens multi-target tracking device based on space-time constraint.
In order to achieve the above object, an embodiment of the present invention provides a cross-lens multi-target tracking method based on space-time constraint, which includes the following steps: preprocessing images in different color spaces to make the color temperature and the tone of the images consistent so as to acquire the camera information of a plurality of camera devices; establishing a corresponding relation of 2D points through a projection matrix of the camera equipment to acquire geometric information among the plurality of camera equipment, wherein the projection matrix is a projection matrix about a 3D world; and matching human body characteristics among the cameras according to the image pickup information and the geometric information so as to acquire each image pickup device picture and a real-time tracking result by utilizing the appearance and the space-time characteristics of a tracking target.
According to the cross-lens multi-target tracking method based on space-time constraints of the embodiment of the invention, human body features are matched among the multiple cameras through the camera information and the geometric information to track the target. The current multi-target tracking algorithm and the multi-camera processing method are effectively combined, and the pose relation matrix of the camera network is exploited, so that multi-camera multi-target tracking is achieved, the robustness of object tracking is improved, tracking errors are reduced, and tracking accuracy is improved.
In addition, the cross-lens multi-target tracking method based on the space-time constraint according to the above embodiment of the invention may further have the following additional technical features:
further, in an embodiment of the present invention, the performing human body feature matching between a plurality of cameras according to the camera information and the geometric information further includes: when any one of the plurality of camera devices detects a tracking target, projecting the position of the tracking target to a coordinate system corresponding to the ground through the projection matrix; and performing cluster analysis on all the points to acquire the same tracking target in other camera devices in the plurality of camera devices.
Further, in an embodiment of the present invention, acquiring the same tracking target in the other image capturing apparatuses among the plurality of image capturing apparatuses further includes: acquiring the optimal group among all results, wherein the optimal group is the group covering the largest number of camera devices with the smallest relative position error; and determining the 3D coordinates of the tracking target through the optimal group, removing from the group, according to these 3D coordinates, the points whose deviation is larger than a first preset value, selecting among the remaining points those whose deviation is smaller than a second preset value, and removing the resulting set, repeating until every point has been assigned to a set.
Further, in one embodiment of the present invention, a Hough voting method is employed, and the position of a pedestrian is determined from the positions of the human body in a plurality of image pickup apparatuses and the pose information of the image pickup apparatuses.
Further, in an embodiment of the present invention, during the tracking the method further includes: matching the tracking result with a pedestrian model to eliminate mismatching, occlusion and missed detection, wherein the pedestrian model comprises one or more of the parameters speed, current position, color feature, time of first appearance, trajectory and current state.
In order to achieve the above object, an embodiment of the present invention provides a cross-lens multi-target tracking device based on space-time constraint, including: an image preprocessing module, used for preprocessing images in different color spaces so that the pictures become consistent in color temperature and tone, in order to acquire the camera information of a plurality of camera devices; an acquisition module, used for establishing a correspondence of 2D points through the projection matrices of the camera devices, in order to acquire the geometric information among the plurality of camera devices, wherein each projection matrix is a projection matrix with respect to the 3D world; and a tracking module, used for matching human body features among the cameras according to the camera information and the geometric information, so as to acquire each camera device picture and a real-time tracking result by using the appearance and spatio-temporal features of the tracked target.
According to the cross-lens multi-target tracking device based on space-time constraints of the embodiment of the invention, human body features are matched among the multiple cameras through the camera information and the geometric information to track the target. The current multi-target tracking algorithm and the multi-camera processing method are effectively combined, and the pose relation matrix of the camera network is exploited, so that multi-camera multi-target tracking is achieved, the robustness of object tracking is improved, tracking errors are reduced, and tracking accuracy is improved.
In addition, the cross-lens multi-target tracking device based on the space-time constraint according to the above embodiment of the invention may further have the following additional technical features:
further, in an embodiment of the present invention, the tracking module is further configured to, when any one of the plurality of image capturing apparatuses detects a tracking target, project a position of the tracking target to a coordinate system corresponding to the ground through the projection matrix, and perform cluster analysis on all the points to obtain the same tracking target in other image capturing apparatuses in the plurality of image capturing apparatuses.
Further, in an embodiment of the present invention, the tracking module is further configured to acquire the optimal group among all the results, wherein the optimal group is the group covering the largest number of camera devices with the smallest relative position error, and to determine the 3D coordinates of the tracking target through the optimal group, so as to remove from the group, according to these 3D coordinates, the points whose deviation is larger than a first preset value, select among the remaining points those whose deviation is smaller than a second preset value, and remove the resulting set, repeating until every point has been assigned to a set.
Further, in an embodiment of the present invention, the device further includes: a positioning module, used for determining the position of a pedestrian, by a Hough voting method, from the positions of the human body in the plurality of camera devices and the pose information of the camera devices.
Further, in an embodiment of the present invention, the device further includes: a matching module, used for matching the tracking result with a pedestrian model to eliminate mismatching, occlusion and missed detection, wherein the pedestrian model comprises one or more of the parameters speed, current position, color feature, time of first appearance, trajectory and current state.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a cross-lens multi-target tracking method based on space-time constraints according to an embodiment of the invention;
FIG. 2 is a flowchart of a cross-lens multi-target tracking method based on space-time constraints, according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a detection result at a certain time according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the positioning and clustering results according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of the detection results of a camera according to one embodiment of the invention;
FIG. 6 is a diagram illustrating positioning results according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating actual tracking results according to one embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a cross-lens multi-target tracking device based on space-time constraints according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The cross-lens multi-target tracking method and device based on space-time constraint proposed by the embodiment of the invention are described below with reference to the accompanying drawings, and firstly, the cross-lens multi-target tracking method based on space-time constraint proposed by the embodiment of the invention will be described with reference to the accompanying drawings.
FIG. 1 is a flowchart of a cross-lens multi-target tracking method based on space-time constraints according to an embodiment of the present invention.
As shown in fig. 1, the cross-lens multi-target tracking method based on space-time constraint includes the following steps:
In step S101, image preprocessing is performed in different color spaces to make the pictures consistent in color temperature and tone, so as to acquire the camera information of a plurality of camera devices.
In the original pictures of multiple cameras, the same object shows different colors in different camera pictures because of the influence of camera orientation, illumination and device differences. Since the color statistics of pedestrians serve as an important feature in the later target tracking, the embodiment of the invention first performs image preprocessing in different color spaces.
For example, even though four cameras film the same ground area at the same time, the four pictures can differ greatly in color temperature and tone, which impairs the subsequent human body feature matching among the multiple cameras. The embodiment of the invention therefore adopts a simple and effective algorithm: normalizing all pictures to the same mean and the same variance in the Lab color space gives the best result, because the coupling between the three channels is smallest in the Lab color space, and the processed images show no noise and no serious color distortion.
Specifically, first fix m_{t,α,β} and σ_{t,α,β} as the target mean and standard deviation of each channel, and record the mean m_{i,α,β} and standard deviation σ_{i,α,β} of the background picture of each camera i (the first frame, or a background obtained by a background-construction algorithm). Each frame I_i is then normalized channel-wise to the same mean and the same variance with the following formula, which prevents the normalization from being influenced by pedestrians appearing in the video:

    I'_i = (I_i − m_{i,α,β}) · σ_{t,α,β} / σ_{i,α,β} + m_{t,α,β}
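A minimal sketch of this per-channel normalization (assuming OpenCV for the Lab conversion; the function and variable names are illustrative, not from the patent):

    import cv2
    import numpy as np

    def normalize_to_target(frame_bgr, bg_mean, bg_std, tgt_mean, tgt_std):
        # bg_mean, bg_std: shape-(3,) statistics of this camera's background picture.
        # tgt_mean, tgt_std: shared target statistics for all cameras.
        lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        # Using the fixed background statistics (rather than per-frame statistics)
        # keeps the normalization stable when pedestrians enter the view.
        lab = (lab - bg_mean) * (tgt_std / bg_std) + tgt_mean
        lab = np.clip(lab, 0, 255).astype(np.uint8)
        return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)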
In step S102, the correspondence of 2D points is established through the projection matrix of the camera device to acquire the geometric information among the plurality of camera devices, wherein the projection matrix is a projection matrix with respect to the 3D world.
In step S103, human body feature matching among the plurality of cameras is performed according to the camera information and the geometric information, so as to acquire each camera picture and a real-time tracking result by using the appearance and spatio-temporal features of the tracked target.
In an embodiment of the present invention, the human body feature matching among the plurality of cameras according to the camera information and the geometric information further includes: when any one of the plurality of camera devices detects a tracking target, projecting the position of the tracking target into the ground coordinate system through the projection matrix; and performing cluster analysis on all the projected points to acquire the same tracking target in the other camera devices among the plurality of camera devices.
Further, in an embodiment of the present invention, acquiring the same tracking target in the other camera devices further includes: acquiring the optimal group among all results, wherein the optimal group is the group covering the largest number of camera devices with the smallest relative position error; and determining the 3D coordinates of the tracking target through the optimal group, removing from the group, according to these 3D coordinates, the points whose deviation is larger than a first preset value, selecting among the remaining points those whose deviation is smaller than a second preset value, and removing the resulting set, repeating until every point has been assigned to a set.
Specifically, for multi-camera multi-target tracking, the embodiment of the invention compares multiple object detection algorithms and performs object detection with Faster R-CNN. A 2D point (x, y) in an image then corresponds to a 3D point (X, Y, Z) in the world as follows, where H is called the projection matrix of the camera and λ is a scale factor:

    λ · (x, y, 1)ᵀ = H · (X, Y, Z, 1)ᵀ
the two cameras can establish a relationship through their projection matrixes about the 3D world, that is, the corresponding relationship of 2D points is established:
Figure BDA0001299698200000062
in an embodiment of the present invention, the earth may be treated as one large camera, and then the projection matrix of all cameras with respect to the earth may be solved. Known cameras i toProjection matrix H of the earthi→gAny point (x) in the camera ii,yi) Then it is at the corresponding coordinate of earth
Figure BDA0001299698200000063
This can be deduced from the following formula:
Figure BDA0001299698200000064
n is detected in the ith camera viewiA person, whose location is
Figure BDA0001299698200000065
It is projected into the coordinate system corresponding to the ground through the corresponding projection matrix,
Figure BDA0001299698200000066
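A sketch of this ground projection (H_to_ground is an assumed calibration result; the helper name is illustrative):

    import numpy as np

    def project_to_ground(points_xy, H_to_ground):
        # Map pixel foot points of shape (n, 2) to ground-plane coordinates
        # via a 3x3 homography.
        pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])  # homogeneous
        proj = pts @ H_to_ground.T
        return proj[:, :2] / proj[:, 2:3]  # divide out the scale factor lambda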
all points then need to be clustered, i.e. to find the same person in different cameras. For this, an optimization problem needs to be solved as follows:
    max_F Σ_{i<j} w_{ij}^k · F_{ij}^k

Here N^k = Σ_i n_i is the total number of people detected in all cameras in the k-th frame, and w_{ij}^k represents the degree of similarity between the i-th and the j-th detection. It contains two factors. The first is the similarity of the human appearance features,

    K(Φ(i, k), Φ(j, k)),

where Φ(i, k) is the color feature of the i-th person and K(a, b) computes the covariance (correlation) coefficient. The second is the degree of positional similarity,

    1(‖p_i − p_j‖ < τ) · exp(−‖p_i − p_j‖² / σ²),

where p_i is the projected ground position of the i-th detection, 1(e) is the indicator function (1 if e is true and 0 otherwise), and σ is a distance control coefficient. F_{ij}^k ∈ {0, 1} indicates the relationship between the i-th and the j-th detection target: F_{ij}^k = 1 means the two are the same person, and F_{ij}^k = 0 means they are not. The constraints take into account that two persons detected inside the same camera cannot be the same person, and that each object appearing in one camera view has at most one match in every other camera view. The triangle inequality in the last constraint states that if l and i as well as l and j are the same person, then i and j are also the same person, i.e., it is a loop constraint. This is an integer optimization problem whose global optimum cannot be solved exactly in a practical system, so in the actual algorithm the embodiment of the invention designs a method for approximating the optimal solution as follows:
(1) First find the best group among all results (the group covering the largest number of cameras with a small relative position error). Specifically, the candidate set is first clustered using the position and color information; the cluster-center feature information is then used for screening, and the optimal position is computed from the remaining reliable elements. A concrete calculation algorithm is given in the following section.
(2) Determine the 3D coordinates of the person using the results in the group; then, relative to these coordinates, remove the selected points with large deviation from the group, pick the points with small deviation among the remaining points, and remove the group from the candidate set. Specifically, the position and the color feature of the person are obtained from the above calculation; the remaining set is then searched for elements that may belong to this person but were not gathered into the class by the previous clustering step, and these are removed from the candidate set. Elements in the class that do not belong to the person are then removed using the color features and the position, and are put back into the candidate set.
(3) Repeat operations (1) and (2) until every point has been assigned to a group, as sketched below.
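A simplified sketch of this greedy approximation (the thresholds and helper logic are illustrative assumptions, not the patent's exact algorithm):

    import numpy as np

    def greedy_group_detections(ground_pts, colors, cam_ids, pos_tol=1.0, feat_tol=0.5):
        # ground_pts: (n, 2) projected foot points; colors: (n, d) color features;
        # cam_ids: (n,) source camera of each detection. Returns index groups.
        remaining = set(range(len(ground_pts)))
        groups = []
        while remaining:
            idx = list(remaining)
            # (1) Seed with the detection whose neighborhood is most consistent.
            seed = min(idx, key=lambda i: np.median(
                np.linalg.norm(ground_pts[idx] - ground_pts[i], axis=1)))
            group = {seed}
            for j in idx:
                if j == seed or cam_ids[j] == cam_ids[seed]:
                    continue  # same-camera detections cannot be the same person
                close = np.linalg.norm(ground_pts[j] - ground_pts[seed]) < pos_tol
                similar = np.linalg.norm(colors[j] - colors[seed]) < feat_tol
                if close and similar and all(cam_ids[j] != cam_ids[g] for g in group):
                    group.add(j)
            # (2) Refine: re-estimate the position and drop large-deviation members.
            center = ground_pts[list(group)].mean(axis=0)
            group = {g for g in group
                     if g == seed or np.linalg.norm(ground_pts[g] - center) < pos_tol}
            groups.append(sorted(group))
            remaining -= group  # (3) repeat on the remaining points
        return groups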
Further, in one embodiment of the present invention, a Hough voting method is employed, and the position of a pedestrian is determined from the positions of the human body in a plurality of image pickup apparatuses and the pose information of the image pickup apparatuses.
Specifically, in the Hough voting implementation, the intersection point of the line segments obtained by projecting the human body onto the ground from two cameras is more likely to be the real position of the pedestrian on the ground. Following the idea of Hough voting, the positions of the human body in multiple cameras and the pose information of the cameras are considered jointly to determine the position of the pedestrian. All camera pictures are assumed to be horizontal, i.e., the head and the feet of a person have the same x coordinate in the camera picture. Projecting the foothold points (x, y) and (x, y + ε) of a camera picture onto the ground according to the formula above yields (x'₁, y'₁) and (x'₂, y'₂); the direction from (x'₁, y'₁) to (x'₂, y'₂) is the projection direction, and the associated ratio gives the change of scale when the camera point (x, y) is projected onto the ground, which is used later when the visual tracking results are small. Writing the mapping matrix between camera i and the ground plane as H_{i→g} = (h_{mn}), any point (x, y) has the ground-plane coordinates

    x' = (h₁₁x + h₁₂y + h₁₃) / ω',   y' = (h₂₁x + h₂₂y + h₂₃) / ω',   ω' = h₃₁x + h₃₂y + h₃₃.

Letting ε → 0 gives ω'₂ → ω'₁ = ω', and the projection direction becomes, up to normalization,

    d = ( (h₁₂ − h₃₂·x') / ω',  (h₂₂ − h₃₂·y') / ω' ).
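A sketch of the foothold projection and this limiting direction (derived directly from the homography formulas above; the names are illustrative):

    import numpy as np

    def ground_point_and_direction(x, y, H):
        # Project foothold (x, y) through homography H; also return the unit
        # direction obtained in the limit eps -> 0 of projecting (x, y + eps).
        w = H[2, 0] * x + H[2, 1] * y + H[2, 2]
        gx = (H[0, 0] * x + H[0, 1] * y + H[0, 2]) / w
        gy = (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / w
        d = np.array([H[0, 1] - H[2, 1] * gx, H[1, 1] - H[2, 1] * gy]) / w
        return np.array([gx, gy]), d / np.linalg.norm(d)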
As shown in fig. 3, in the output of actual human detection the estimate of the foothold tends to carry some error; see, for example, the rectangular box of the detection result for the second person from the right in the third camera.
In fig. 4, circles represent Hough votes and stars represent the results of the conventional method; cam1 is drawn as solid line No. 4, cam2 as solid line No. 3, cam3 as solid line No. 2, and cam4 as solid line No. 1, where the center of each line is the position at which the person's foothold is projected onto the ground under that camera. The results obtained with the Hough voting method generally appear at the convergence of the projection directions of the multiple cameras. For example, the person at the lower left corner of the ground coordinates is detected in cam1, cam2 and cam3; the detected positions in cam1 and cam2 are accurate, while the foothold detected by cam3 deviates considerably, but the direction of the human body detected in each camera is accurate, i.e., the left and right edges of the rectangular box are reliable. Note that the three camera center points do not coincide and have very different positions on the ground plane, yet the three straight lines intersect almost at one point, which shows that Hough voting greatly improves the confidence of the determined position. To handle cases such as the possibly inaccurate left-right position of the second rectangle from the left in cam2, the algorithm adopts the idea of RANSAC: instead of combining all data at once, it searches for an optimal data combination whose result has the highest reliability and the smallest variance. On a plane, two straight lines determine a point; therefore the algorithm randomly selects 2 cameras from the set, solves for the corresponding position, computes the global loss function at that position, and finally keeps, among the several combinations, the position with the smallest loss. In this way the influence of positioning information with large individual errors is eliminated, which improves the positioning accuracy when the number of cameras is limited (usually at most 4).
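A minimal sketch of this pairwise selection (the loss definition is an illustrative choice; directions are assumed to be unit vectors):

    import itertools
    import numpy as np

    def vote_position(points, dirs):
        # points: per-camera ground projections of the foothold; dirs: unit
        # projection directions. Returns the pairwise intersection with the
        # smallest global loss.
        best, best_loss = None, np.inf
        for i, j in itertools.combinations(range(len(points)), 2):
            A = np.column_stack([dirs[i], -dirs[j]])
            if abs(np.linalg.det(A)) < 1e-9:
                continue  # near-parallel lines: no stable intersection
            t = np.linalg.solve(A, points[j] - points[i])
            cand = points[i] + t[0] * dirs[i]  # p_i + t0*d_i == p_j + t1*d_j
            # Global loss: summed distance from the candidate to every line.
            loss = sum(np.linalg.norm((cand - p) - np.dot(cand - p, d) * d)
                       for p, d in zip(points, dirs))
            if loss < best_loss:
                best, best_loss = cand, loss
        return best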
Fig. 5 shows a situation with 7 people in the camera coverage area. Three of them are seen by all four cameras simultaneously: the two people to the right in the middle of the ground, and the man to the right of cam3 who appears at the far left of cam1. The remaining two people in the middle are seen by three cameras simultaneously. Except for the topmost person, who is detected only in cam4, the rest appear in at least two camera views. The positioning results in fig. 6 show that the results obtained by the Hough voting method are very accurate, as can be seen from the relative positions of each person and the convergence of the projection lines: apart from the topmost person detected only by cam4, the projected straight lines of the cameras for each remaining pedestrian intersect almost at a single point. Note that the positions of two of the detection boxes carry a large error. The first is the second, smaller rectangular box from the left in cam4: the recognition result is biased because the step is occluded, and since the person is far away from cam4, the error is magnified by the scale change of the projection. On the ground plane (the red line in the upper left corner, whose center is the ground projection of the cam4 foothold estimate) it differs from the true result by more than 100 pixel values; however, the error of its direction is very small, and its extension almost passes through the position determined with the other three cameras. The rightmost rectangle in cam2 also carries some error in the foothold identification; because it is far from cam2, the error is again magnified by the scale change, so that in actual ground coordinates the foothold estimate is off by more than 50 pixel values. Yet the error of the projection direction is small, and the positioning result shows that this pedestrian is localized accurately by using the information of two camera pictures simultaneously.
Mathematical description: let E^k denote the set of pedestrians present in adjacent frames. Φ(i, k) is the color feature of the i-th person in the k-th frame and K is the correlation function; Γ(i, k) and V(i, k) are the position and velocity functions, respectively. θ₁, θ₂ and θ₃ are threshold parameters that eliminate false matches, i.e., they handle the situations in which pedestrians disappear and appear. The frame-to-frame matching can then be written as

    max_F Σ_{i,j} K(Φ(i, k), Φ(j, k+1)) · F^k_{ij}
    s.t.  ‖Γ(i, k) − Γ(j, k+1)‖ ≤ θ₁,  ‖V(i, k) − V(j, k+1)‖ ≤ θ₂,  K(Φ(i, k), Φ(j, k+1)) ≥ θ₃ for matched pairs,

where F^k is the adjacency matrix representing the relationship between the current frame and the previous frame: F^k_{ij} = 1 means the two pedestrians are the same person, and F^k_{ij} = 0 means they are not. Note that the last constraint requires that F^k contain at most one nonzero element in each row and in each column.
The above problem can be transformed into a minimum-cost-flow optimization, and the global optimum can be obtained with a minimum-cost-flow solving algorithm. Note, however, that a practical tracking problem demands real-time operation and causality: the prediction for the current frame may only take the previous frames into account and must not be affected by later frames.
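For reference, a sketch of the min-cost-flow formulation (assuming networkx; scores are integers, and the handling of unmatched tracks and detections is omitted):

    import networkx as nx

    def match_min_cost_flow(scores):
        # scores[i][j]: integer matching score between previous-frame track i
        # and current-frame detection j; maximizing the total score equals
        # minimizing the total negative cost.
        n, m = len(scores), len(scores[0])
        G = nx.DiGraph()
        for i in range(n):
            G.add_edge("src", ("t", i), capacity=1, weight=0)
            for j in range(m):
                G.add_edge(("t", i), ("d", j), capacity=1, weight=-scores[i][j])
        for j in range(m):
            G.add_edge(("d", j), "snk", capacity=1, weight=0)
        flow = nx.max_flow_min_cost(G, "src", "snk")
        return [(i, j) for i in range(n) for j in range(m)
                if flow[("t", i)].get(("d", j), 0) > 0]

Because of the causality requirement, the embodiment instead matches frame by frame with the following greedy procedure: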
(1) First find the match with the highest confidence, i.e., the point with the least occlusion and the sparsest crowd density. Specifically, all currently detected pedestrians are matched against the pedestrians of the previous frame, and the pair with the highest matching score is found.
(2) Remove it from the set E.
(3) Repeat the above operation on the remaining sets.
(4) If the confidence of every remaining pair falls below a given threshold, the remaining points are judged to have no correspondence: either the pedestrian disappeared from the picture earlier, or a new person appears in the current frame.
The method obtains a feasible solution in linear time and only uses the information of the current frame and the previous frame, as sketched below.
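A sketch of this greedy, causal matching (the score matrix and threshold are illustrative):

    import numpy as np

    def greedy_match(score, threshold=0.3):
        # score[i, j]: matching score between previous-frame pedestrian i and
        # current-frame detection j. Unmatched indices correspond to pedestrians
        # that disappeared or to newly appeared persons.
        score = score.copy().astype(float)
        matches = []
        while score.size and score.max() >= threshold:
            i, j = np.unravel_index(np.argmax(score), score.shape)  # best pair
            matches.append((int(i), int(j)))
            score[i, :] = -np.inf  # remove the matched pair from the set E
            score[:, j] = -np.inf
        return matches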
Further, in an embodiment of the present invention, during the tracking the method further includes: matching the tracking result with a pedestrian model to eliminate mismatching, occlusion and missed detection, wherein the pedestrian model comprises one or more of the parameters speed, current position, color feature, time of first appearance, trajectory and current state.
It can be understood that, since each tracking step only considers the relationship between consecutive frames, the probability of a false match is relatively high; in actual video, occlusion, false detections and missed detections may additionally cause the track to be lost. To address these problems, the embodiment of the invention proposes a pedestrian model that makes full use of the previous tracking results, eliminates mismatching, and allows the target to disappear for a short time, thereby handling occlusion and missed detection.
For example, each pedestrian model includes the following parameters:
(1) Speed: v
(2) Current position: (x, y)
(3) Color feature: hist (histogram statistics)
(4) Time of first appearance: T_appear
(5) Trajectory (historical coordinates): {(x_t, y_t)}
(6) Current state: state
Then, when multi-target tracking is carried out, the current frame is matched against the constructed pedestrian models on the basis of the above formula. After the final matching result is obtained, each pedestrian model is updated. The update distinguishes two cases: if a match satisfying the requirements is found in the current frame, the person is considered to have been detected in this frame; if no such match is found, the person is considered lost in this frame.
If the pedestrian is detected, the following information is updated:
(1) Speed: v ← α·v + (1 − α)·v_new, with v_new = (x_new, y_new) − (x, y)
(2) Position: (x, y) ← (x, y) + v
(3) Color feature: hist ← β·hist + (1 − β)·hist_new
(4) Current state: state ← 1
(5) Trajectory update: append (x_t, y_t) = (x, y)
Here α is an exponential smoothing term that smooths the speed of the pedestrian and reduces the influence of the noise in each frame's estimation error on the tracking result. The pedestrian position is not updated directly with the position of the current frame; instead, the speed is updated first and the position is then advanced by the speed. In this way the previous speed information is used and, together with the maximum speed limit, the damage caused by a mismatch in a single frame is reduced. The cost is a certain hysteresis: if the speed of an object changes greatly, the model needs a longer time to correct itself. Since similar situations rarely occur in the actual tracking problem, the benefit of adopting this strategy outweighs the drawback.
If the pedestrian is not detected, the information is updated as follows:
(1) Speed: v ← γ·v, with 0 ≤ γ ≤ 1
(2) Position: (x, y) ← (x, y) + v
(3) Color feature: unchanged
(4) Trajectory update: append (x_t, y_t) = (x, y)
(5) Current state: state ← state − 1
There are two cases in which a pedestrian is not detected: either the pedestrian has disappeared from the camera view, or the pedestrian is temporarily missed because of occlusion, false detection or a false match. In the former case the pedestrian only needs to be deleted; in the second case all information about the pedestrian must be retained and prepared as well as possible for the next detection match. Concretely, the speed attenuation term γ lets a lost pedestrian continue to advance at the original speed, so that the model appears at a plausible position in the next frame and the correct match is easily obtained; attenuating the speed additionally increases the stability of the system. In experiments, γ should not be chosen too large: since there is no real information after the loss, a model that keeps moving too fast is hard to associate with a new detection and also disturbs the matching of other people. Nor should γ be too small: the model would then stop in place soon after the loss, which brings the same problems. In practice γ = 0.9 is generally used. The position information is then updated with the speed, and the current position is likewise appended to the trajectory. Finally, there is a very important state-adjustment step: through this adjustment the state reflects the number of frames for which the pedestrian has been lost, and if a pedestrian is not activated for a long period of time, the algorithm considers that the pedestrian has left the monitored area permanently and removes the pedestrian from the list.
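A sketch of this pedestrian model with its two update branches (α, β, γ and the removal threshold are illustrative values):

    import numpy as np

    class PedestrianModel:
        def __init__(self, pos, hist, t, alpha=0.7, beta=0.8, gamma=0.9):
            self.v = np.zeros(2)                 # speed
            self.pos = np.asarray(pos, float)    # current position (x, y)
            self.hist = hist                     # color histogram feature
            self.t_appear = t                    # time of first appearance
            self.track = [tuple(self.pos)]       # trajectory (historical coordinates)
            self.state = 1                       # current state
            self.alpha, self.beta, self.gamma = alpha, beta, gamma

        def update_detected(self, new_pos, new_hist):
            v_new = np.asarray(new_pos, float) - self.pos
            self.v = self.alpha * self.v + (1 - self.alpha) * v_new  # smoothed speed
            self.pos = self.pos + self.v
            self.hist = self.beta * self.hist + (1 - self.beta) * new_hist
            self.state = 1
            self.track.append(tuple(self.pos))

        def update_lost(self):
            self.v = self.gamma * self.v   # attenuate the speed after a loss
            self.pos = self.pos + self.v   # keep advancing at the decayed speed
            self.state -= 1                # counts the number of lost frames
            self.track.append(tuple(self.pos))

        def expired(self, max_lost=30):
            # Remove the pedestrian after being inactive for a long period.
            return self.state < -max_lost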
Finally, each camera view is displayed together with the real-time tracking results, as shown in fig. 7.
According to the cross-lens multi-target tracking method based on space-time constraints provided by the embodiment of the invention, the information of multiple cameras is combined, and the geometric information among the cameras as well as the appearance and spatio-temporal features of the target are considered simultaneously, achieving more effective data fusion. Hough voting is used to determine the 3D position of a pedestrian, and the camera priors eliminate the influence of the inaccurate foothold-based estimation of the conventional method; tracking the 3D position of the pedestrian directly enables more effective human analysis. A pedestrian model is introduced that comprehensively considers the tracking results of multiple frames together with the spatial position and walking trajectory of the pedestrian, achieving more robust multi-target tracking. Human body features are matched among the multiple cameras through the camera information and the geometric information to track the target; the current multi-target tracking algorithm and the multi-camera processing method are effectively combined, and the pose relation matrix of the camera network is exploited, so that multi-camera multi-target tracking is achieved, the robustness of object tracking is improved, tracking errors are reduced, and tracking accuracy is improved.
Next, a cross-lens multi-target tracking device based on space-time constraint proposed by an embodiment of the invention is described with reference to the accompanying drawings.
FIG. 8 is a schematic structural diagram of a cross-lens multi-target tracking device based on space-time constraints according to an embodiment of the present invention.
As shown in fig. 8, the cross-lens multi-target tracking apparatus 10 based on space-time constraint includes: a pre-processing module 100, an acquisition module 200, and a tracking module 300.
The preprocessing module 100 is configured to perform image preprocessing on different color spaces to make pictures consistent in color temperature and color tone, so as to obtain image pickup information of a plurality of image pickup apparatuses. The acquisition module 200 is configured to establish a correspondence relationship between 2D points through a projection matrix of the image capturing apparatus to acquire geometric information between a plurality of image capturing apparatuses, where the projection matrix is a projection matrix about a 3D world. The tracking module 300 is configured to perform human body feature matching between the multiple cameras according to the camera information and the geometric information, so as to obtain each camera device picture and a real-time tracking result by using the appearance and the spatiotemporal features of the tracking target. The device 10 of the embodiment of the invention realizes the multi-target object tracking based on multiple cameras by combining the current multi-target tracking algorithm and the multi-camera processing method and utilizing the network position and posture relation matrix of the camera equipment, thereby improving the robustness of object tracking, reducing tracking errors and improving the tracking accuracy.
Further, in an embodiment of the present invention, the tracking module 300 is further configured to, when any one of the plurality of camera devices detects a tracking target, project the position of the tracking target into the ground coordinate system through the projection matrix, and perform cluster analysis on all the projected points to acquire the same tracking target in the other camera devices among the plurality of camera devices.
Further, in an embodiment of the present invention, the tracking module 300 is further configured to acquire the optimal group among all results, the optimal group being the group covering the largest number of camera devices with the smallest relative position error, and to determine the 3D coordinates of the tracking target through the optimal group, so as to remove from the group, according to these 3D coordinates, the points whose deviation is larger than a first preset value, select among the remaining points those whose deviation is smaller than a second preset value, and remove the resulting set, repeating until every point has been assigned to a set.
Further, in one embodiment of the present invention, the apparatus 10 of the embodiment of the present invention further comprises a positioning module. The positioning module is used for determining the position of a pedestrian, by a Hough voting method, from the positions of the human body in the plurality of camera devices and the pose information of the camera devices.
Further, in one embodiment of the present invention, the apparatus 10 of the embodiment of the present invention further comprises a matching module. The matching module is used for matching the tracking result with a pedestrian model to eliminate mismatching, occlusion and missed detection, wherein the pedestrian model comprises one or more of the parameters speed, current position, color feature, time of first appearance, trajectory and current state.
It should be noted that the explanation of the embodiment of the cross-lens multi-target tracking method based on the space-time constraint is also applicable to the cross-lens multi-target tracking device based on the space-time constraint of the embodiment, and details are not repeated here.
According to the cross-lens multi-target tracking device based on space-time constraints provided by the embodiment of the invention, the information of multiple cameras is combined, and the geometric information among the cameras as well as the appearance and spatio-temporal features of the target are considered simultaneously, achieving more effective data fusion. Hough voting is used to determine the 3D position of a pedestrian, and the camera priors eliminate the influence of the inaccurate foothold-based estimation of the conventional method; tracking the 3D position of the pedestrian directly enables more effective human analysis. A pedestrian model is introduced that comprehensively considers the tracking results of multiple frames together with the spatial position and walking trajectory of the pedestrian, achieving more robust multi-target tracking. Human body features are matched among the multiple cameras through the camera information and the geometric information to track the target; the current multi-target tracking algorithm and the multi-camera processing method are effectively combined, and the pose relation matrix of the camera network is exploited, so that multi-camera multi-target tracking is achieved, the robustness of object tracking is improved, tracking errors are reduced, and tracking accuracy is improved.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, a first feature being "on" or "under" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediary. Also, a first feature "on," "over," or "above" a second feature may be directly or obliquely above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely below the second feature, or may simply mean that the first feature is at a lower level than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (6)

1. A cross-lens multi-target tracking method based on space-time constraint is characterized by comprising the following steps:
preprocessing images in different color spaces to make the color temperature and the tone of the images consistent so as to acquire the camera information of a plurality of camera devices;
establishing a corresponding relation of 2D points through a projection matrix of the camera equipment to acquire geometric information among the plurality of camera equipment, wherein the projection matrix is a projection matrix about a 3D world;
carrying out human body feature matching among the plurality of cameras according to the image pickup information and the geometric information, so as to obtain each image pickup device picture and a real-time tracking result by utilizing the appearance and space-time features of a tracking target, wherein a Hough voting method is adopted to determine the position of a pedestrian according to the positions of the human body in a plurality of image pickup devices and the pose information of the image pickup devices, so as to obtain the same tracking target in the other image pickup devices among the plurality of image pickup devices;
and matching the tracking result with a pedestrian model to eliminate mismatching, occlusion and missed detection, wherein the pedestrian model comprises one or more of the parameters speed, current position, color feature, time of first appearance, trajectory and current state.
2. The space-time constraint-based cross-lens multi-target tracking method according to claim 1, wherein the human body feature matching between a plurality of cameras is performed according to the camera information and the geometric information, and further comprising:
when any one of the plurality of camera devices detects a tracking target, projecting the position of the tracking target to a coordinate system corresponding to the ground through the projection matrix;
and performing cluster analysis on all the points to acquire the same tracking target in other camera devices in the plurality of camera devices.
3. The space-time-constraint-based cross-lens multi-target tracking method according to claim 2, wherein the obtaining of the same tracking target in the other image capturing apparatuses among the plurality of image capturing apparatuses further comprises:
acquiring an optimal group among all results, wherein the optimal group is the group covering the largest number of camera devices with the smallest relative position error;
and determining the 3D coordinates of the tracking target through the optimal group, removing from the group, according to the 3D coordinates of the tracking target, the points whose deviation is larger than a first preset value, selecting among the remaining points those whose deviation is smaller than a second preset value, and removing the resulting set, repeating until every point has been assigned to a set.
4. A cross-lens multi-target tracking device based on space-time constraint is characterized by comprising the following components:
the image preprocessing module is used for preprocessing images in different color spaces to enable the color temperature and the tone of the images to be consistent so as to acquire the image pickup information of a plurality of image pickup devices;
the acquisition module is used for establishing a corresponding relation of 2D points through a projection matrix of the camera equipment so as to acquire geometric information among the plurality of camera equipment, wherein the projection matrix is a projection matrix about a 3D world;
the tracking module is used for matching human body features among the cameras according to the camera information and the geometric information, so as to obtain each camera device picture and a real-time tracking result by utilizing the appearance and space-time features of a tracking target; the positioning module is used for determining the position of a pedestrian, by a Hough voting method, according to the positions of the human body in a plurality of camera devices and the pose information of the camera devices;
and the matching module is used for matching the tracking result with a pedestrian model to eliminate mismatching, occlusion and missed detection, wherein the pedestrian model comprises one or more of the parameters speed, current position, color feature, time of first appearance, trajectory and current state.
5. The cross-lens multi-target tracking device based on space-time constraint according to claim 4, wherein the tracking module is further configured to, when any one of the plurality of image capturing apparatuses detects a tracking target, project the position of the tracking target to a coordinate system corresponding to the ground through the projection matrix, and perform cluster analysis on all points to obtain the same tracking target in other image capturing apparatuses of the plurality of image capturing apparatuses.
6. The spatio-temporal-constraint-based cross-lens multi-target tracking device according to claim 5, wherein the tracking module is further configured to acquire the optimal group among all results, the optimal group being the group covering the largest number of camera devices with the smallest relative position error, and to determine the 3D coordinates of the tracking target through the optimal group, so as to remove from the group, according to the 3D coordinates of the tracking target, the points whose deviation is larger than a first preset value, select among the remaining points those whose deviation is smaller than a second preset value, and remove the resulting set, repeating until every point has been assigned to a set.
CN201710358354.7A 2017-05-19 2017-05-19 Cross-lens multi-target tracking method and device based on space-time constraint Active CN107240124B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710358354.7A CN107240124B (en) 2017-05-19 2017-05-19 Cross-lens multi-target tracking method and device based on space-time constraint
PCT/CN2017/115672 WO2018209934A1 (en) 2017-05-19 2017-12-12 Cross-lens multi-target tracking method and apparatus based on space-time constraints

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710358354.7A CN107240124B (en) 2017-05-19 2017-05-19 Cross-lens multi-target tracking method and device based on space-time constraint

Publications (2)

Publication Number Publication Date
CN107240124A CN107240124A (en) 2017-10-10
CN107240124B true CN107240124B (en) 2020-07-17

Family

ID=59985144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710358354.7A Active CN107240124B (en) 2017-05-19 2017-05-19 Cross-lens multi-target tracking method and device based on space-time constraint

Country Status (2)

Country Link
CN (1) CN107240124B (en)
WO (1) WO2018209934A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107240124B (en) * 2017-05-19 2020-07-17 清华大学 Cross-lens multi-target tracking method and device based on space-time constraint
CN108921881A (en) * 2018-06-28 2018-11-30 重庆邮电大学 A kind of across camera method for tracking target based on homography constraint
CN108876823B (en) * 2018-07-02 2022-05-17 晋建志 Monocular cross-camera multi-target recognition, positioning and tracking device and method based on space-time continuity
CN110969644B (en) * 2018-09-28 2023-12-01 杭州海康威视数字技术股份有限公司 Personnel track tracking method, device and system
CN109558831B (en) * 2018-11-27 2023-04-07 成都索贝数码科技股份有限公司 Cross-camera pedestrian positioning method fused with space-time model
JPWO2020179730A1 (en) * 2019-03-04 2020-09-10
CN110379050A (en) * 2019-06-06 2019-10-25 上海学印教育科技有限公司 A kind of gate control method, apparatus and system
CN110428449B (en) * 2019-07-31 2023-08-04 腾讯科技(深圳)有限公司 Target detection tracking method, device, equipment and storage medium
CN110728702B (en) * 2019-08-30 2022-05-20 深圳大学 High-speed cross-camera single-target tracking method and system based on deep learning
CN110706250B (en) * 2019-09-27 2022-04-01 广东博智林机器人有限公司 Object tracking method, device and system and storage medium
CN110942471B (en) * 2019-10-30 2022-07-01 电子科技大学 Long-term target tracking method based on space-time constraint
CN110807804B (en) * 2019-11-04 2023-08-29 腾讯科技(深圳)有限公司 Method, apparatus, device and readable storage medium for target tracking
CN111027462A (en) * 2019-12-06 2020-04-17 长沙海格北斗信息技术有限公司 Pedestrian track identification method across multiple cameras
CN111061825B (en) * 2019-12-10 2020-12-18 武汉大学 Method for identifying matching and correlation of space-time relationship between mask and reloading camouflage identity
CN111738220B (en) * 2020-07-27 2023-09-15 腾讯科技(深圳)有限公司 Three-dimensional human body posture estimation method, device, equipment and medium
CN111815682B (en) * 2020-09-07 2020-12-22 长沙鹏阳信息技术有限公司 Multi-target tracking method based on multi-track fusion
CN112907652B (en) * 2021-01-25 2024-02-02 脸萌有限公司 Camera pose acquisition method, video processing method, display device, and storage medium
CN113223060B (en) * 2021-04-16 2022-04-15 天津大学 Multi-agent cooperative tracking method and device based on data sharing and storage medium
CN113449627B (en) * 2021-06-24 2022-08-09 深兰科技(武汉)股份有限公司 Personnel tracking method based on AI video analysis and related device
CN114299120B (en) * 2021-12-31 2023-08-04 北京银河方圆科技有限公司 Compensation method, registration method, and readable storage medium
CN115631464B (en) * 2022-11-17 2023-04-04 北京航空航天大学 Pedestrian three-dimensional representation method oriented to large space-time target association

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226638B (en) * 2007-01-18 2010-05-19 中国科学院自动化研究所 Method and apparatus for standardization of multiple camera system
CN104899894B (en) * 2014-03-05 2017-09-01 南京理工大学 A kind of method that use multiple cameras carries out motion target tracking
CN104376577A (en) * 2014-10-21 2015-02-25 南京邮电大学 Multi-camera multi-target tracking algorithm based on particle filtering
CN104778690B (en) * 2015-04-02 2017-06-06 中国电子科技集团公司第二十八研究所 A kind of multi-target orientation method based on camera network
US10152825B2 (en) * 2015-10-16 2018-12-11 Fyusion, Inc. Augmenting multi-view image data with synthetic objects using IMU and image data
CN106355604B (en) * 2016-08-22 2019-10-18 杭州保新科技有限公司 Tracking image target method and system
CN107240124B (en) * 2017-05-19 2020-07-17 清华大学 Cross-lens multi-target tracking method and device based on space-time constraint

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102184242A (en) * 2011-05-16 2011-09-14 天津大学 Cross-camera video abstract extracting method
CN102831445A (en) * 2012-08-01 2012-12-19 厦门大学 Target detection method based on semantic Hough transformation and partial least squares
CN105631881A (en) * 2015-12-30 2016-06-01 四川华雁信息产业股份有限公司 Target detection method and apparatus

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Sabine Sternig et al.; "Multi-camera Multi-object Tracking by Robust Hough-based Homography Projections"; 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops); 2011-11-13; pp. 1689-1696 *
Kai Jüngling et al.; "Person re-identification in multi-camera networks"; CVPR 2011 Workshops; 2011-06-25; pp. 55-61 *
Liu Yang; "Research on Multi-Object Tracking Algorithms in a Multi-Camera Network Environment"; China Master's Theses Full-text Database, Information Science and Technology; 2015-06-15; No. 6; I138-658 *

Also Published As

Publication number Publication date
CN107240124A (en) 2017-10-10
WO2018209934A1 (en) 2018-11-22

Similar Documents

Publication Publication Date Title
CN107240124B (en) Cross-lens multi-target tracking method and device based on space-time constraint
WO2021196294A1 (en) Cross-video person location tracking method and system, and device
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
EP2426642B1 (en) Method, device and system for motion detection
US7929728B2 (en) Method and apparatus for tracking a movable object
Chang et al. Tracking Multiple People Under Occlusion Using Multiple Cameras.
US7321386B2 (en) Robust stereo-driven video-based surveillance
Clipp et al. Parallel, real-time visual SLAM
US20130208948A1 (en) Tracking and identification of a moving object from a moving sensor using a 3d model
US20210174539A1 (en) A method for estimating the pose of a camera in the frame of reference of a three-dimensional scene, device, augmented reality system and computer program therefor
WO2018101247A1 (en) Image recognition imaging apparatus
US20210168347A1 (en) Cross-Modality Face Registration and Anti-Spoofing
CN101383005A (en) Method for separating passenger target image and background by auxiliary regular veins
Snidaro et al. Automatic camera selection and fusion for outdoor surveillance under changing weather conditions
Shalnov et al. Convolutional neural network for camera pose estimation from object detections
JP2007219603A (en) Person tracking device, person tracking method and person tracking program
JP2002342762A (en) Object tracing method
CN104112281B (en) Method Of Tracking Objects Using Hyperspectral Imagery
JP2021077039A (en) Image processing apparatus, image processing method, and program
Hadi et al. Fusion of thermal and depth images for occlusion handling for human detection from mobile robot
JP6504711B2 (en) Image processing device
JP7374632B2 (en) Information processing device, information processing method and program
US11669992B2 (en) Data processing
CN110706251B (en) Cross-lens tracking method for pedestrians
Chang Significance of omnidirectional fisheye cameras for feature-based visual SLAM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant