Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a target object track analysis method with high efficiency and good accuracy.
The technical scheme of the invention is as follows:
a target object trajectory analysis method includes the following steps:
S1, when a target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]];
S2, judging the recording condition of the target object in the cameras in the community and carrying out targeted analysis:
if the target object moves within a single camera picture, analyzing the track of the target object and executing step S3;
if the target object enters the adjacent area of the pictures shot by two cameras, analyzing the track of the target object and executing step S4;
if the target object moves out of one camera picture and no other camera picture is adjacent to it, executing step S5;
S3, setting a video frame acquisition frequency according to the target object, selecting video frames at that frequency for detection, and performing object detection and crossing rate calculation on each detected frame image to carry out target object track analysis;
S4, according to the shooting positions of the cameras in the community, the coverage areas of all cameras in the community are processed in advance; if two coverage areas are adjacent, the adjacent cameras are associated and labeled in the adjacent direction, and when a target object moves from one camera monitoring area to another along one direction, the following operations are carried out:
S4a, storing the track information of the target object in the current camera video;
S4b, acquiring the target object recognition and mark information identification results at the boundary point of the adjacent camera in the moving direction, and performing exact information link matching;
S4c, if the information link matching at the boundary point succeeds, namely the object recognition at the boundary point acquires the coordinate information of the target object, linking the object tracks of the front and rear cameras; otherwise, the link fails;
S5, if the target object moves out of a camera monitoring area and no other camera monitoring area is adjacent to it, or no boundary point connection information is matched (i.e. the link in step S4c fails), periodically matching the mark information of all objects in the community whose mark information identification accuracy rate is less than 80%; if the matching succeeds, the tracks are regarded as moving track information of the same target object and are merged; otherwise, the matching fails and no processing is performed.
Preferably, in step S1, "when the target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]" means: when a vehicle enters the community, a snapshot camera at the entrance/exit gate captures the license plate, generating license plate information and the vehicle body coordinate information [[x0min, y0min], [x0max, y0max]] corresponding to the snapshot camera at the gate.
Preferably, in step S1, "when the target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]" means: when a pedestrian enters the community, a camera at the community entrance/exit takes a face snapshot, generating face information and the corresponding pedestrian coordinate information [[x0min, y0min], [x0max, y0max]] under the entrance/exit camera.
As a preferred technical solution, in step S3, the video frame acquisition frequency is determined by the camera's frame rate x (frames per second), the target object's moving speed y (m/s), the target object's length a (m), and the crossing rate threshold b: the maximum interval acquisition frequency n = (a × b)/(y/x).
As a preferred technical solution, the specific method of "performing object detection and cross rate calculation for each detected frame image to perform target object trajectory analysis" in step S3 includes:
S3a, acquiring the frame coordinate set Object_registration_set of target objects through target object recognition and position coordinate statistics; when object recognition detects a target-class object entering from the boundary of the camera detection area, judging whether the system already holds mark information and an accuracy rate for the specified object; if so, recording the track information Object_registration_i [[ximin, yimin], [ximax, yimax]] of the target object together with the mark information, accuracy rate and current time; if not, assigning the object a unique mark and recording its coordinate information [[ximin, yimin], [ximax, yimax]], an accuracy rate of 0 and the current time, where i is the number of the target object detected by object recognition;
S3b, acquiring the frame coordinate set Identification_registration_set of target-class objects carrying mark information through mark information identification; identifying the mark information in each video frame, and if the mark information of a target object can be obtained, recording the current frame's target-class object mark information and object frame coordinates Identification_registration_j [[xjmin, yjmin], [xjmax, yjmax]], where j is the number of the target-class object detected by mark information identification;
S3c, for the Object_registration_set and Identification_registration_set acquired in steps S3a and S3b, calculating the crossing rate of target-class object frames from the two coordinate sets within the same video frame through the two-rectangle crossing rate algorithm; if the crossing rate reaches 80%, the two frames are regarded as the same target-class object; the accuracy rates of their mark information are then compared, the information with the highest accuracy rate is selected as the mark information, and its coordinate track is recorded.
As a further preferable technical solution, in the step S3c, a specific method for "calculating the intersection rate of the two frame coordinate sets and the target type object frame in the same frame video by using a two-rectangle intersection rate algorithm" includes:
a. calculating the two rectangle areas s1 and s2;
b. if (ximin ≥ xjmax) or (ximax ≤ xjmin) or (yimin ≥ yjmax) or (yimax ≤ yjmin), the intersection area of the two rectangles is 0 and the crossing rate is also 0; otherwise, go to step c;
c. if the two rectangles overlap, the overlapping region must be a rectangle; take the candidate point set ([ximin, yimin], [ximin, yimax], [ximin, yjmin], [ximin, yjmax], [ximax, yimin], [ximax, yimax], [ximax, yjmin], [ximax, yjmax], [xjmin, yimin], [xjmin, yimax], [xjmin, yjmin], [xjmin, yjmax], [xjmax, yimin], [xjmax, yimax], [xjmax, yjmin], [xjmax, yjmax]), 16 points in total, and judge for each whether it lies inside both rectangles simultaneously, i.e. for a point P, if ximin ≤ xP ≤ ximax and yimin ≤ yP ≤ yimax and xjmin ≤ xP ≤ xjmax and yjmin ≤ yP ≤ yjmax, record the point into the overlapping-region point set points_set;
d. de-duplicating the point set points_set to obtain four points, which are the four vertex coordinates of the overlapping-region rectangle;
e. calculating the area s of the overlapping region from the four vertex coordinates of the overlapping-region rectangle;
f. calculating the crossing rate of the overlapping region as W = (s/s1 + s/s2)/2; this crossing rate W of the overlapping region is the crossing rate of the target-class object frames.
According to the target object track analysis method, object recognition is used to calculate the crossing rate of the overlapping region between adjacent frames of a target object (such as a vehicle, a person or another moving object in a smart community) in a video frame sequence, so as to determine whether two detections are the same object, thereby realizing track analysis of the target object. The method effectively handles cases where the target object's mark information cannot be identified or the identification rate is low, reducing the dependence on mark information identification and improving the efficiency and accuracy of the target object track analysis mechanism.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a plurality of" generally means at least two, without excluding the case of at least one, unless the context clearly dictates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the related objects before and after it are in an "or" relationship.
The word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", and any other variations thereof are intended to cover a non-exclusive inclusion, such that a commodity or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a commodity or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a commodity or system that includes the element.
Fig. 1 shows a target object trajectory analysis method according to the present invention, which includes the following steps:
S1, when a target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]; for example: when a vehicle enters the community, its license plate information is accurately recorded and authenticated at the entrance/exit gate, and when a pedestrian enters the community, a face snapshot is likewise taken.
S2, judging the recording condition of the target object among the cameras in the community and carrying out targeted analysis. Because the cameras in the community are distributed somewhat irregularly, not all roads in the community are covered; object track analysis is therefore performed with the crossing rate algorithm, divided into three cases:
if the target object moves within a single camera picture, analyzing the track of the target object and executing step S3;
if the target object enters the adjacent area of the pictures shot by two cameras, analyzing the track of the target object and executing step S4;
if the target object moves out of one camera picture and no other camera picture is adjacent to it, executing step S5;
S3, setting a video frame acquisition frequency according to the target object, selecting video frames at that frequency for detection, and performing object detection and crossing rate calculation on each detected frame image to carry out target object track analysis;
S4, according to the shooting positions of the cameras in the community, the coverage areas of all cameras in the community are processed in advance; if two coverage areas are adjacent, the adjacent cameras are associated and labeled in the adjacent direction, and when a target object moves from one camera monitoring area to another along one direction, the following operations are carried out:
S4a, storing the track information of the target object in the current camera video;
S4b, acquiring the target object recognition and mark information identification results at the boundary point of the adjacent camera in the moving direction, and performing exact information link matching;
S4c, if the information link matching at the boundary point succeeds, namely the object recognition at the boundary point acquires the coordinate information of the target object, linking the object tracks of the front and rear cameras; otherwise, the link fails;
S5, if the target object moves out of a camera monitoring area and no other camera monitoring area is adjacent to it, or no boundary point connection information is matched (i.e. the link in step S4c fails), periodically matching the mark information of all objects in the community whose mark information identification accuracy rate is less than 80%; if the matching succeeds, the tracks are regarded as moving track information of the same target object and are merged; otherwise, the matching fails and no processing is performed.
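To make steps S4a to S4c and S5 concrete, the following Python sketch shows one possible way of linking track segments at a boundary point and periodically merging low-accuracy segments. It is illustrative only: the dictionary fields ("mark", "points", "accuracy") and the function names are assumptions for this sketch, not part of the method as claimed.

```python
def link_tracks(prev_track, boundary_detection):
    """Sketch of steps S4a-S4c: link track segments across adjacent cameras.

    prev_track holds the segment stored from the previous camera (S4a);
    boundary_detection is the recognition result captured at the boundary
    point of the adjacent camera (S4b), or None if no coordinates were
    acquired there. The structures are illustrative assumptions.
    """
    if boundary_detection is None or boundary_detection["mark"] != prev_track["mark"]:
        return None  # S4c: link fails; left for the periodic merge in S5
    # S4c: link succeeds, so join the two cameras' track points
    return {"mark": prev_track["mark"],
            "points": prev_track["points"] + boundary_detection["points"]}

def merge_low_accuracy(tracks, threshold=0.8):
    """Sketch of step S5: periodically merge segments whose mark information
    identification accuracy rate is below the 80% threshold and whose marks match."""
    confident, merged = [], {}
    for t in tracks:
        if t["accuracy"] >= threshold:
            confident.append(t)  # confident segments are left as they are
        elif t["mark"] in merged:
            merged[t["mark"]]["points"] += t["points"]  # same target object
        else:
            merged[t["mark"]] = {"mark": t["mark"],
                                 "accuracy": t["accuracy"],
                                 "points": list(t["points"])}
    return confident + list(merged.values())
```

In this sketch a failed link simply returns None, leaving the unlinked segment to be picked up by the periodic S5 merge, which mirrors the fallback order described above.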
In practical applications, when the target object is a vehicle, the step of "when the target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]" described in S1 means: when a vehicle enters the community, its license plate is captured by the snapshot camera at the entrance/exit gate, generating license plate information and the vehicle body coordinate information [[x0min, y0min], [x0max, y0max]] corresponding to the snapshot camera at the gate. When the target object is a pedestrian, the same step means: when a pedestrian enters the community, a camera at the community entrance/exit takes a face snapshot, generating face information and the corresponding pedestrian coordinate information [[x0min, y0min], [x0max, y0max]] under the entrance/exit camera.
As a preferable scheme, in step S3, the video frame acquisition frequency is determined by the camera's frame rate x (frames per second), the target object's moving speed y (m/s), the target object's length a (m), and the crossing rate threshold b: the maximum interval acquisition frequency n = (a × b)/(y/x). For example, suppose the target object is a vehicle, the camera captures about 25 frames/s, the vehicle speed in the community is limited to 20 km/h (i.e. 5.6 m/s), and the vehicle body is 3.8 to 4.3 m long and 1.6 to 1.8 m wide. If the crossing rate threshold is chosen as 30%, the maximum interval acquisition frequency is n = (0.3 × 3.8)/(5.6/25) ≈ 5.1; since the sampling interval must not exceed 5.1 frames, one frame can be selected for detection every 4 frames, reducing the amount of calculation while ensuring the detection effect.
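The calculation in the example above can be reproduced with a few lines of Python (a minimal sketch; the function name and parameter names are chosen for illustration):

```python
def max_interval_frequency(frame_rate_x, speed_y, length_a, threshold_b):
    """Maximum interval acquisition frequency n = (a * b) / (y / x).

    frame_rate_x: camera frame rate in frames per second
    speed_y:      target object moving speed in m/s
    length_a:     target object length in metres
    threshold_b:  crossing rate threshold (0 to 1)
    """
    metres_per_frame = speed_y / frame_rate_x  # distance moved between consecutive frames
    return (length_a * threshold_b) / metres_per_frame

# Worked example from the text: 25 frames/s, 5.6 m/s, 3.8 m vehicle body, 30% threshold
n = max_interval_frequency(25, 5.6, 3.8, 0.3)
print(round(n, 1))  # 5.1
```

Because n is an upper bound on the sampling interval, any whole-frame interval at or below it (here, sampling every 4th or 5th frame) keeps the expected crossing rate above the threshold.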
As a preferable scheme, the specific method of "performing object detection and cross rate calculation for each detected frame image to perform target object trajectory analysis" in step S3 includes:
S3a, acquiring the frame coordinate set Object_registration_set of target objects through target object recognition and position coordinate statistics; when object recognition detects a target-class object entering from the boundary of the camera detection area, judging whether the system already holds mark information and an accuracy rate for the specified object; if so, recording the track information Object_registration_i [[ximin, yimin], [ximax, yimax]] of the target object together with the mark information, accuracy rate and current time; if not, assigning the object a unique mark and recording its coordinate information [[ximin, yimin], [ximax, yimax]], an accuracy rate of 0 and the current time, where i is the number of the target object detected by object recognition;
S3b, acquiring the frame coordinate set Identification_registration_set of target-class objects carrying mark information through mark information identification; identifying the mark information in each video frame, and if the mark information of a target object can be obtained, recording the current frame's target-class object mark information and object frame coordinates Identification_registration_j [[xjmin, yjmin], [xjmax, yjmax]], where j is the number of the target-class object detected by mark information identification;
S3c, for the Object_registration_set and Identification_registration_set acquired in steps S3a and S3b, calculating the crossing rate of target-class object frames from the two coordinate sets within the same video frame through the two-rectangle crossing rate algorithm; if the crossing rate reaches 80%, the two frames are regarded as the same target-class object; the accuracy rates of their mark information are then compared, the information with the highest accuracy rate is selected as the mark information, and its coordinate track is recorded.
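The matching in step S3c can be sketched in Python as follows. This is a non-limiting illustration: the box format [[xmin, ymin], [xmax, ymax]], the dictionary fields, the helper names, and the sample license plate string are assumptions; the 80% threshold is the one given in step S3c, and the crossing rate here is computed directly from the overlap rectangle's width and height, which is equivalent to the two-rectangle crossing rate.

```python
def cross_rate(box_i, box_j):
    # Crossing rate W = (s/s1 + s/s2)/2 for two [[xmin, ymin], [xmax, ymax]] boxes.
    (a0, b0), (a1, b1) = box_i
    (c0, d0), (c1, d1) = box_j
    w = max(0.0, min(a1, c1) - max(a0, c0))  # overlap width
    h = max(0.0, min(b1, d1) - max(b0, d0))  # overlap height
    s, s1, s2 = w * h, (a1 - a0) * (b1 - b0), (c1 - c0) * (d1 - d0)
    return (s / s1 + s / s2) / 2

def attach_marks(object_set, identification_set, threshold=0.8):
    """Sketch of S3c: attach mark information from identification results to
    detected objects when their frames cross at 80% or more, keeping the
    mark with the highest accuracy rate. Field names are illustrative."""
    for obj in object_set:
        for ident in identification_set:
            if cross_rate(obj["box"], ident["box"]) >= threshold \
                    and ident["accuracy"] > obj["accuracy"]:
                obj["mark"], obj["accuracy"] = ident["mark"], ident["accuracy"]
    return object_set
```

A freshly detected object (accuracy 0, per S3a) thus picks up any overlapping identification's mark, while an object already carrying a higher-accuracy mark keeps it.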
In step S3c, the specific method for calculating the intersection rate of the two frame coordinate sets and the target type object frame in the same frame of video through the two rectangle intersection rate algorithm includes:
a. calculating the two rectangle areas s1 and s2;
b. if (ximin ≥ xjmax) or (ximax ≤ xjmin) or (yimin ≥ yjmax) or (yimax ≤ yjmin), the intersection area of the two rectangles is 0 and the crossing rate is also 0; otherwise, go to step c;
c. if the two rectangles overlap, the overlapping region must be a rectangle; take the candidate point set ([ximin, yimin], [ximin, yimax], [ximin, yjmin], [ximin, yjmax], [ximax, yimin], [ximax, yimax], [ximax, yjmin], [ximax, yjmax], [xjmin, yimin], [xjmin, yimax], [xjmin, yjmin], [xjmin, yjmax], [xjmax, yimin], [xjmax, yimax], [xjmax, yjmin], [xjmax, yjmax]), 16 points in total, and judge for each whether it lies inside both rectangles simultaneously, i.e. for a point P, if ximin ≤ xP ≤ ximax and yimin ≤ yP ≤ yimax and xjmin ≤ xP ≤ xjmax and yjmin ≤ yP ≤ yjmax, record the point into the overlapping-region point set points_set;
d. de-duplicating the point set points_set to obtain four points, which are the four vertex coordinates of the overlapping-region rectangle;
e. calculating the area s of the overlapping region from the four vertex coordinates of the overlapping-region rectangle;
f. calculating the crossing rate of the overlapping region as W = (s/s1 + s/s2)/2; this crossing rate W of the overlapping region is the crossing rate of the target-class object frames.
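Steps a to f can be sketched in Python as follows (a non-limiting illustration; the function name and the box format [[xmin, ymin], [xmax, ymax]] are assumptions). The 16 candidate points are enumerated as all combinations of the two rectangles' x and y bounds, as in step c, and the de-duplicated interior points give the vertices of the overlap rectangle:

```python
def cross_rate(box_i, box_j):
    """Crossing rate W of two axis-aligned boxes, following steps a-f.

    Each box is [[xmin, ymin], [xmax, ymax]]. A sketch of the
    16-candidate-point method; returns 0.0 when the boxes do not overlap.
    """
    (xi0, yi0), (xi1, yi1) = box_i
    (xj0, yj0), (xj1, yj1) = box_j
    # a. areas of the two rectangles
    s1 = (xi1 - xi0) * (yi1 - yi0)
    s2 = (xj1 - xj0) * (yj1 - yj0)
    # b. no overlap means crossing rate 0
    if xi0 >= xj1 or xi1 <= xj0 or yi0 >= yj1 or yi1 <= yj0:
        return 0.0
    # c. candidate points: all combinations of the boxes' x and y bounds;
    #    keep those lying inside both rectangles
    points_set = []
    for x in (xi0, xi1, xj0, xj1):
        for y in (yi0, yi1, yj0, yj1):
            if xi0 <= x <= xi1 and yi0 <= y <= yi1 and \
               xj0 <= x <= xj1 and yj0 <= y <= yj1:
                points_set.append((x, y))
    # d. de-duplicate to get the vertices of the overlap rectangle
    corners = set(points_set)
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    # e. area of the overlap rectangle
    s = (max(xs) - min(xs)) * (max(ys) - min(ys))
    # f. crossing rate W
    return (s / s1 + s / s2) / 2
```

For example, two identical boxes give W = 1.0, and two 2×2 boxes overlapping in a 1×1 corner give W = 0.25.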
According to the target object track analysis method, object recognition is used to calculate the crossing rate of the overlapping region between adjacent frames of a target object (such as a vehicle, a person or another moving object in a smart community) in a video frame sequence, so as to determine whether two detections are the same object, thereby realizing track analysis of the target object. The method effectively handles cases where the target object's mark information cannot be identified or the identification rate is low, reducing the dependence on mark information identification and improving the efficiency and accuracy of the target object track analysis mechanism.
In summary, the embodiments of the present invention are merely exemplary and should not be construed as limiting the scope of the invention. All equivalent changes and modifications made according to the content of the claims of the present invention should fall within the technical scope of the present invention.