CN111489380A - Target object track analysis method - Google Patents

Target object track analysis method

Info

Publication number
CN111489380A
CN111489380A (application CN202010290588.4A)
Authority
CN
China
Prior art keywords
max
target object
information
camera
identification
Prior art date
Legal status
Granted
Application number
CN202010290588.4A
Other languages
Chinese (zh)
Other versions
CN111489380B (en)
Inventor
徐梦
魏晓林
许凯翔
黄平
Current Assignee
Jiangsu Tiancheng Intelligent Group Co ltd
Original Assignee
Shanghai Tiancheng Biji Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Tiancheng Biji Technology Co ltd filed Critical Shanghai Tiancheng Biji Technology Co ltd
Priority to CN202010290588.4A priority Critical patent/CN111489380B/en
Publication of CN111489380A publication Critical patent/CN111489380A/en
Application granted granted Critical
Publication of CN111489380B publication Critical patent/CN111489380B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS; G06: COMPUTING, CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis; G06T7/20 Analysis of motion; G06T7/292 Multi-camera tracking
    • G06T7/00 Image analysis; G06T7/70 Determining position or orientation of objects or cameras; G06T7/73 using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/10 Image acquisition modality; G06T2207/10016 Video, image sequence
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/30 Subject of image, context of image processing; G06T2207/30232 Surveillance
    • G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/30 Subject of image, context of image processing; G06T2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target object trajectory analysis method. Through object recognition, the method calculates the crossing rate of the overlapping regions of a target object (such as a vehicle, person, or other moving object in a smart community) in adjacent frames of a video sequence, thereby determining whether detections belong to the same object and realizing trajectory analysis of the target. The method effectively handles cases where the target's identifying mark information cannot be recognized or is recognized at a low rate, reducing dependence on recognition of that mark information and improving the efficiency and accuracy of the trajectory-analysis mechanism.

Description

Target object track analysis method
Technical Field
The invention relates to methods for analyzing the trajectory of a target object in video, and in particular to a target object trajectory analysis method that is both efficient and accurate.
Background
As the concepts and specifications of the smart community mature and intelligent technologies such as face recognition, face-based access control, object recognition, and vehicle recognition are continuously deployed and updated, the degree of intelligence of smart communities keeps rising. In this development, the accuracy of trajectory analysis for moving objects in the community is critical to community security. For example, analyzing and recording the travel of elderly residents living alone or of other persons of concern supports real-time monitoring of their health and alarms for abnormal situations, which is essential for precise and efficient community assistance and for timely, accurate information. Likewise, predicting illegal behavior, detecting in real time the travel tracks of unauthorized persons entering the community, and managing vehicle travel and parking in the community all depend on accurate analysis of moving targets' trajectories. At present, moving objects are usually monitored through object recognition and mark-information recognition, and the recognized information is plotted as discrete points on a time axis to obtain the target's track.
However, this discrete-point marking approach is limited by the recognition rate, which often makes the analyzed trajectory inaccurate; weather and occlusion in particular scenes can leave the mark information unrecognizable and the target's track undetectable for long stretches. In license plate recognition, for instance, the barrier gates at community entrances use supplemental lighting and high-quality recognition devices, so accuracy there is essentially close to 100%. Once a vehicle is inside the community, however, ordinary cameras are mostly used, and because of shooting angles and the orientation of the vehicle body while it moves, license plates are basically unreadable; data along the vehicle's route is intermittently lost, and only a track recommendation algorithm can roughly suggest the most probable route to fill the gaps.
Therefore, there is a need for an improvement to overcome the deficiencies of the prior art.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a target object track analysis method with high efficiency and good accuracy.
The technical scheme of the invention is as follows:
A target object trajectory analysis method includes the following steps:
S1, when the target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]];
S2, judging the recording condition of the target object in the cameras in the community and carrying out targeted analysis:
if the target object moves in the same camera picture, analyzing the track of the target object and executing the step S3;
if the target object enters the adjacent area of the pictures shot by the two cameras, analyzing the track of the target object and executing the step S4;
if the target object moves out of one camera picture and no other camera pictures are adjacent to the target object, executing step S5;
s3, setting video frame acquisition frequency according to a target object, selecting video frames according to the video frame acquisition frequency for detection, and performing object detection and cross rate calculation on each detected frame image to perform target object track analysis;
s4, according to the shooting positions of the cameras in the community, the coverage area of the cameras in the whole community is processed in advance, if the coverage areas are adjacent, the adjacent cameras are associated and labeled in the adjacent direction, and when a target object moves from one camera monitoring area to another camera monitoring area along one direction, the following operations are carried out:
s4a, storing track information of a target object in the current camera video;
s4b, acquiring information of target object identification and mark information identification at a critical point of adjacent cameras in the moving direction, and performing accurate information link matching;
s4c, if the information link matching beyond the boundary point is successful, namely the object identification at the boundary point acquires the coordinate information of the target object, linking the object tracks of the front camera and the back camera; otherwise, the link fails;
S5, if the target object moves out of a camera monitoring area with no adjacent camera monitoring area, or no boundary-point link information is matched (i.e., the link in step S4c fails), regularly match the mark information of all objects in the community whose mark-information recognition accuracy is below 80%; if matching succeeds, the tracks are considered to belong to the same target object and their movement track information is merged; otherwise, matching fails and no processing is performed.
Preferably, the step S1 of "when the target object enters the community, performing deterministic information authentication to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]" is: when a vehicle enters the community, its license plate is captured under the snapshot camera at the barrier-gate entrance, generating license plate information and the corresponding vehicle-body coordinate information [[x0min, y0min], [x0max, y0max]] in that camera's frame.
Preferably, the step S1 of "when the target object enters the community, performing deterministic information authentication to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]" is: when a pedestrian enters the community, the camera at the community entrance captures the face, generating face information and the corresponding pedestrian coordinate information [[x0min, y0min], [x0max, y0max]] in that camera's frame.
As a preferred technical solution, in step S3 the video-frame acquisition frequency is determined by the camera frame rate x (frames per second), the target object's moving speed y (m/s), the target object's length a (m), and the crossing-rate threshold b: the maximum interval acquisition frequency is n = (a × b)/(y/x).
As a preferred technical solution, the specific method of "performing object detection and cross rate calculation for each detected frame image to perform target object trajectory analysis" in step S3 includes:
S3a, acquire the target object's frame coordinate set Object_registration_set through target object recognition and position-coordinate statistics. When object recognition detects that a target-class object enters from the boundary of the camera's detection area, judge whether mark information and an accuracy rate are already specified for the object in the system. If so, record the target object's track information Object_registration_i [[ximin, yimin], [ximax, yimax]] together with the mark information, accuracy rate, and current time; if not, assign the object a unique mark and record its coordinate information [[ximin, yimin], [ximax, yimax]], an accuracy rate of 0, and the current time, where i is the index of the target object detected by object recognition;
S3b, through mark-information recognition, acquire the frame coordinate set Identification_registration_set of target-class objects carrying mark information; recognize the mark information in each video frame, and if the target object's mark information can be obtained, record the current frame's mark information for the target-class object together with its object frame coordinates Identification_registration_j [[xjmin, yjmin], [xjmax, yjmax]], where j is the index of the target-class object detected by mark-information recognition;
S3c, for the Object_registration_set and Identification_registration_set acquired in steps S3a and S3b, calculate the crossing rate of the target-class object frames from the two coordinate sets within the same video frame using the two-rectangle crossing-rate algorithm; if the crossing rate reaches 80%, the two frames are judged to be the same target-class object, the accuracy rates of their mark information are compared, the mark information with the highest accuracy rate is selected, and its coordinate track is recorded.
As a further preferable technical solution, in the step S3c, a specific method for "calculating the intersection rate of the two frame coordinate sets and the target type object frame in the same frame video by using a two-rectangle intersection rate algorithm" includes:
a. calculate the two rectangle areas s1 and s2;
b. if (ximin ≥ xjmax) or (ximax ≤ xjmin) or (yimin ≥ yjmax) or (yimax ≤ yjmin), the intersection area of the two rectangles is 0 and the crossing rate is also 0; otherwise, go to step c;
c. if the two rectangles overlap, the overlap region must be a rectangle; then, for the candidate point set ([ximin, yimin], [ximin, yimax], [ximin, yjmin], [ximin, yjmax], [ximax, yimin], [ximax, yimax], [ximax, yjmin], [ximax, yjmax], [xjmin, yimin], [xjmin, yimax], [xjmin, yjmin], [xjmin, yjmax], [xjmax, yimin], [xjmax, yimax], [xjmax, yjmin], [xjmax, yjmax]), 16 points in total, judge whether each point lies within the boundaries of both rectangles at once; that is, if a point P satisfies ximin ≤ xP ≤ ximax and yimin ≤ yP ≤ yimax and xjmin ≤ xP ≤ xjmax and yjmin ≤ yP ≤ yjmax, record it into the point set points_set of the overlap region;
d. deduplicate points_set to obtain four points, which are the four corner coordinates of the overlap rectangle;
e. calculate the overlap area s from the four corner coordinates of the overlap rectangle;
f. calculate the crossing rate of the overlap region as W = (s/s1 + s/s2)/2; this crossing rate W is the crossing rate of the target-class object frames.
According to the target object trajectory analysis method of the invention, object recognition is used to calculate the crossing rate of the overlapping regions of a target object (such as a vehicle, person, or other moving object in a smart community) in adjacent frames of a video sequence, determining whether detections belong to the same object and realizing trajectory analysis of the target. The method effectively handles cases where the target's identifying mark information cannot be recognized or is recognized at a low rate, reducing dependence on recognition of that mark information and improving the efficiency and accuracy of the trajectory-analysis mechanism.
Drawings
Fig. 1 is a flow chart of a target object trajectory analysis method according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a plurality" generally means at least two, without excluding at least one, unless the context clearly dictates otherwise.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that an article or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such an article or system. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other like elements in an article or system that includes the element.
Fig. 1 shows a target object trajectory analysis method according to the present invention, which includes the following steps:
S1, when the target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]. For example, when a vehicle enters the community, its license plate information can be recorded and authenticated at the barrier-gate entrance; when a pedestrian enters the community, a face snapshot is likewise taken.
S2, judge how the target object is recorded by the cameras in the community and analyze accordingly. Because the cameras in a community are distributed somewhat irregularly and do not cover every road, trajectory analysis using the crossing-rate algorithm is divided into three cases:
if the target object moves in the same camera picture, analyzing the track of the target object and executing the step S3;
if the target object enters the adjacent area of the pictures shot by the two cameras, analyzing the track of the target object and executing the step S4;
if the target object moves out of one camera picture and no other camera pictures are adjacent to the target object, executing step S5;
s3, setting video frame acquisition frequency according to a target object, selecting video frames according to the video frame acquisition frequency for detection, and performing object detection and cross rate calculation on each detected frame image to perform target object track analysis;
s4, according to the shooting positions of the cameras in the community, the coverage area of the cameras in the whole community is processed in advance, if the coverage areas are adjacent, the adjacent cameras are associated and labeled in the adjacent direction, and when a target object moves from one camera monitoring area to another camera monitoring area along one direction, the following operations are carried out:
s4a, storing track information of a target object in the current camera video;
s4b, acquiring information of target object identification and mark information identification at a critical point of adjacent cameras in the moving direction, and performing accurate information link matching;
s4c, if the information link matching beyond the boundary point is successful, namely the object identification at the boundary point acquires the coordinate information of the target object, linking the object tracks of the front camera and the back camera; otherwise, the link fails;
S5, if the target object moves out of a camera monitoring area with no adjacent camera monitoring area, or no boundary-point link information is matched (i.e., the link in step S4c fails), regularly match the mark information of all objects in the community whose mark-information recognition accuracy is below 80%; if matching succeeds, the tracks are considered to belong to the same target object and their movement track information is merged; otherwise, matching fails and no processing is performed.
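Step S5's periodic re-matching can be sketched as below. The `Track` record, the 0.8 threshold applied as a fraction, and the exact-string comparison of mark information are illustrative assumptions; the patent specifies only that mark information recognized with accuracy below 80% is regularly matched and that tracks are merged on success.

```python
# Hedged sketch of step S5: periodically re-match tracks whose mark-information
# recognition accuracy is below 80% and merge trajectories that agree.
# Track fields and the exact-string match rule are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Track:
    mark_info: str                                # e.g. a license-plate string
    accuracy: float                               # recognition accuracy in [0, 1]
    points: list = field(default_factory=list)    # [(time, box), ...]

def merge_low_confidence_tracks(tracks):
    """Merge each low-accuracy track into a confident track with the same
    mark information; tracks that fail to match are left unchanged."""
    confident = [t for t in tracks if t.accuracy >= 0.8]
    uncertain = [t for t in tracks if t.accuracy < 0.8]
    for t in uncertain:
        match = next((c for c in confident if c.mark_info == t.mark_info), None)
        if match is not None:      # same target object: merge the track info
            match.points.extend(t.points)
        else:                      # matching failed: no processing
            confident.append(t)
    return confident
```

A low-confidence sighting of plate "P1" would thus be folded into the confident "P1" track, while an unmatched track is simply kept as-is.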
In practical applications, when the target object is a vehicle, the step S1 of "when the target object enters the community, performing deterministic information authentication to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]]" is: when a vehicle enters the community, its license plate is captured under the snapshot camera at the barrier-gate entrance, generating license plate information and the corresponding vehicle-body coordinate information [[x0min, y0min], [x0max, y0max]] in that camera's frame. When the target object is a pedestrian, the same step is: when a pedestrian enters the community, the camera at the community entrance captures the face, generating face information and the corresponding pedestrian coordinate information [[x0min, y0min], [x0max, y0max]] in that camera's frame.
As a preferable scheme, in step S3 the video-frame acquisition frequency is determined by the camera frame rate x (frames per second), the target object's moving speed y (m/s), the target object's length a (m), and the crossing-rate threshold b: the maximum interval acquisition frequency is n = (a × b)/(y/x). For example, suppose the target object is a vehicle, the camera captures about 25 frames/s, the vehicle's speed in the community is limited to 20 km/h (about 5.6 m/s), and the vehicle body is 3.8 to 4.3 m long and 1.6 to 1.8 m wide. If the crossing-rate threshold is chosen as 30%, the maximum interval acquisition frequency is n = (0.3 × 3.8)/(5.6/25) ≈ 5.1. Since the interval must not exceed 5.1, one frame can be selected for detection every 4 frames, reducing the amount of computation while ensuring the detection effect.
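The sampling rule above, n = (a × b)/(y/x), can be sketched as follows; the function and variable names are illustrative, not from the patent.

```python
# Sketch of the maximum sampling-interval rule n = (a * b) / (y / x).
# Names (fps, speed_mps, length_m, overlap_threshold) are illustrative.
def max_frame_interval(fps: float, speed_mps: float, length_m: float,
                       overlap_threshold: float) -> int:
    """Largest number of frames that may be skipped while the target still
    overlaps its previous box by at least overlap_threshold of its length."""
    meters_per_frame = speed_mps / fps          # distance moved between frames
    n = (length_m * overlap_threshold) / meters_per_frame
    return int(n)                               # round down to stay safe

# Worked example from the text: 25 fps, about 5.6 m/s (20 km/h), 3.8 m body,
# 30% crossing-rate threshold: n ~ 5.1, so detect roughly one frame in five.
interval = max_frame_interval(fps=25, speed_mps=5.6, length_m=3.8,
                              overlap_threshold=0.3)
print(interval)  # 5
```

Rounding down keeps the sampled boxes overlapping by at least the chosen threshold even at the speed limit.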
As a preferable scheme, the specific method of "performing object detection and cross rate calculation for each detected frame image to perform target object trajectory analysis" in step S3 includes:
S3a, acquire the target object's frame coordinate set Object_registration_set through target object recognition and position-coordinate statistics. When object recognition detects that a target-class object enters from the boundary of the camera's detection area, judge whether mark information and an accuracy rate are already specified for the object in the system. If so, record the target object's track information Object_registration_i [[ximin, yimin], [ximax, yimax]] together with the mark information, accuracy rate, and current time; if not, assign the object a unique mark and record its coordinate information [[ximin, yimin], [ximax, yimax]], an accuracy rate of 0, and the current time, where i is the index of the target object detected by object recognition;
S3b, through mark-information recognition, acquire the frame coordinate set Identification_registration_set of target-class objects carrying mark information; recognize the mark information in each video frame, and if the target object's mark information can be obtained, record the current frame's mark information for the target-class object together with its object frame coordinates Identification_registration_j [[xjmin, yjmin], [xjmax, yjmax]], where j is the index of the target-class object detected by mark-information recognition;
S3c, for the Object_registration_set and Identification_registration_set acquired in steps S3a and S3b, calculate the crossing rate of the target-class object frames from the two coordinate sets within the same video frame using the two-rectangle crossing-rate algorithm; if the crossing rate reaches 80%, the two frames are judged to be the same target-class object, the accuracy rates of their mark information are compared, the mark information with the highest accuracy rate is selected, and its coordinate track is recorded.
In step S3c, the specific method for calculating the intersection rate of the two frame coordinate sets and the target type object frame in the same frame of video through the two rectangle intersection rate algorithm includes:
a. calculate the two rectangle areas s1 and s2;
b. if (ximin ≥ xjmax) or (ximax ≤ xjmin) or (yimin ≥ yjmax) or (yimax ≤ yjmin), the intersection area of the two rectangles is 0 and the crossing rate is also 0; otherwise, go to step c;
c. if the two rectangles overlap, the overlap region must be a rectangle; then, for the candidate point set ([ximin, yimin], [ximin, yimax], [ximin, yjmin], [ximin, yjmax], [ximax, yimin], [ximax, yimax], [ximax, yjmin], [ximax, yjmax], [xjmin, yimin], [xjmin, yimax], [xjmin, yjmin], [xjmin, yjmax], [xjmax, yimin], [xjmax, yimax], [xjmax, yjmin], [xjmax, yjmax]), 16 points in total, judge whether each point lies within the boundaries of both rectangles at once; that is, if a point P satisfies ximin ≤ xP ≤ ximax and yimin ≤ yP ≤ yimax and xjmin ≤ xP ≤ xjmax and yjmin ≤ yP ≤ yjmax, record it into the point set points_set of the overlap region;
d. deduplicate points_set to obtain four points, which are the four corner coordinates of the overlap rectangle;
e. calculate the overlap area s from the four corner coordinates of the overlap rectangle;
f. calculate the crossing rate of the overlap region as W = (s/s1 + s/s2)/2; this crossing rate W is the crossing rate of the target-class object frames.
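Steps a through f above can be sketched as follows. For brevity the overlap rectangle is computed directly with max/min, which yields the same region as the 16-candidate-point enumeration in step c; the function and variable names are illustrative.

```python
# Sketch of the two-rectangle crossing-rate W = (s/s1 + s/s2)/2 from steps a-f.
# The overlap rectangle is found directly with max/min (equivalent to the
# patent's 16-point candidate enumeration); names are illustrative.
def crossing_rate(box_i, box_j) -> float:
    """Boxes are [[xmin, ymin], [xmax, ymax]]; returns W in [0, 1]."""
    (xi0, yi0), (xi1, yi1) = box_i
    (xj0, yj0), (xj1, yj1) = box_j
    s1 = (xi1 - xi0) * (yi1 - yi0)      # step a: the two rectangle areas
    s2 = (xj1 - xj0) * (yj1 - yj0)
    # Step b: disjoint rectangles have zero intersection area and rate.
    if xi0 >= xj1 or xi1 <= xj0 or yi0 >= yj1 or yi1 <= yj0:
        return 0.0
    # Steps c-e: the overlap region is itself a rectangle.
    w = min(xi1, xj1) - max(xi0, xj0)
    h = min(yi1, yj1) - max(yi0, yj0)
    s = w * h
    # Step f: average of the overlap's share of each rectangle.
    return (s / s1 + s / s2) / 2

# Identical boxes give W = 1; under step S3c's 80% threshold the detection
# box and the identification box would then be treated as the same object.
print(crossing_rate([[0, 0], [4, 2]], [[0, 0], [4, 2]]))  # 1.0
print(crossing_rate([[0, 0], [4, 2]], [[2, 0], [6, 2]]))  # 0.5
```

Note that W averages the two coverage ratios rather than using intersection-over-union, so two boxes of very different sizes can still score moderately high when the smaller one is fully contained.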
According to the target object trajectory analysis method of the invention, object recognition is used to calculate the crossing rate of the overlapping regions of a target object (such as a vehicle, person, or other moving object in a smart community) in adjacent frames of a video sequence, determining whether detections belong to the same object and realizing trajectory analysis of the target. The method effectively handles cases where the target's identifying mark information cannot be recognized or is recognized at a low rate, reducing dependence on recognition of that mark information and improving the efficiency and accuracy of the trajectory-analysis mechanism.
In summary, the embodiments of the present invention are merely exemplary and should not be construed as limiting the scope of the invention. All equivalent changes and modifications made according to the content of the claims of the present invention should fall within the technical scope of the present invention.

Claims (6)

1. A target object trajectory analysis method is characterized in that: the method comprises the following steps:
S1, when the target object enters the community, deterministic information authentication is performed to generate target object mark information and initial coordinate information [[x0min, y0min], [x0max, y0max]];
S2, judging the recording condition of the target object in the cameras in the community and carrying out targeted analysis:
if the target object moves in the same camera picture, analyzing the track of the target object and executing the step S3;
if the target object enters the adjacent area of the pictures shot by the two cameras, analyzing the track of the target object and executing the step S4;
if the target object moves out of one camera picture and no other camera pictures are adjacent to the target object, executing step S5;
s3, setting video frame acquisition frequency according to a target object, selecting video frames according to the video frame acquisition frequency for detection, and performing object detection and cross rate calculation on each detected frame image to perform target object track analysis;
s4, according to the shooting positions of the cameras in the community, the coverage area of the cameras in the whole community is processed in advance, if the coverage areas are adjacent, the adjacent cameras are associated and labeled in the adjacent direction, and when a target object moves from one camera monitoring area to another camera monitoring area along one direction, the following operations are carried out:
s4a, storing track information of a target object in the current camera video;
s4b, acquiring information of target object identification and mark information identification at a critical point of adjacent cameras in the moving direction, and performing accurate information link matching;
s4c, if the information link matching beyond the boundary point is successful, namely the object identification at the boundary point acquires the coordinate information of the target object, linking the object tracks of the front camera and the back camera; otherwise, the link fails;
S5, if the target object moves out of a camera monitoring area with no adjacent camera monitoring area, or no boundary-point link information is matched (i.e., the link in step S4c fails), regularly match the mark information of all objects in the community whose mark-information recognition accuracy is below 80%; if matching succeeds, the tracks are considered to belong to the same target object and their movement track information is merged; otherwise, matching fails and no processing is performed.
2. The target object trajectory analysis method of claim 1, wherein: in step S1, "when the target object enters the community, certainty information authentication is performed to generate target object mark information and initial coordinate information [[x_0min, y_0min], [x_0max, y_0max]]" means: when a vehicle enters the community, its license plate is captured by the snapshot camera at the barrier-gate entrance/exit, generating the license plate information and the vehicle body coordinate information [[x_0min, y_0min], [x_0max, y_0max]] under that snapshot camera.
3. The target object trajectory analysis method of claim 1, wherein: in step S1, "when the target object enters the community, certainty information authentication is performed to generate target object mark information and initial coordinate information [[x_0min, y_0min], [x_0max, y_0max]]" means: when a pedestrian enters the community, the camera at the community entrance/exit captures the face, generating the face information and the corresponding pedestrian coordinate information [[x_0min, y_0min], [x_0max, y_0max]] under that entrance/exit camera.
4. The target object trajectory analysis method of claim 1, wherein: in step S3, the video frame acquisition frequency is determined by the camera frame rate x (frames per second), the moving speed y (m/s) of the target object, the length a (m) of the target object, and the crossing rate threshold b: the maximum interval acquisition frequency is n = (a × b) / (y / x).
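The formula of claim 4 can be computed directly; the sketch below is illustrative only (the function name and the 25 fps / 5 m/s example values are assumptions, not from the patent):

```python
def max_sampling_interval(fps_x, speed_y, length_a, threshold_b):
    """Maximum interval acquisition frequency n = (a * b) / (y / x).

    fps_x       -- camera frame rate x, in frames per second
    speed_y     -- target object moving speed y, in m/s
    length_a    -- target object length a, in metres
    threshold_b -- crossing rate threshold b (e.g. 0.8)
    """
    metres_per_frame = speed_y / fps_x  # y / x: distance the object covers per frame
    return (length_a * threshold_b) / metres_per_frame

# Assumed example: a 4 m vehicle at 5 m/s under a 25 fps camera, b = 0.8:
# n = (4 * 0.8) / (5 / 25) = 16, i.e. sample at most every 16th frame.
```

A faster object or a stricter threshold shrinks the interval, forcing denser sampling.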
5. The target object trajectory analysis method of claim 1, wherein: the specific method of "performing object detection and crossing rate calculation on each detected frame image to carry out target object trajectory analysis" in step S3 is as follows:
S3a, acquiring the frame coordinate set Object_registration_set of the target object through object identification and position coordinate statistics; when object identification detects that an object of the target type enters from the boundary of the camera's detection area, judging whether mark information and its accuracy rate already exist for the object in the system; if so, recording the track information Object_registration_i = [[x_imin, y_imin], [x_imax, y_imax]] of the target object together with the mark information, the accuracy rate and the current time; if not, uniquely marking the object and recording its coordinate information [[x_imin, y_imin], [x_imax, y_imax]], an accuracy rate of 0 and the current time, where i is the number of the target object detected by object identification;
S3b, acquiring the frame coordinate set Identification_registration_set of target-type objects carrying mark information through mark information identification; identifying the mark information of each video frame, and if the mark information of the target object can be obtained, recording the mark information of the target-type object in the current frame and its object frame coordinates Identification_registration_j = [[x_jmin, y_jmin], [x_jmax, y_jmax]], where j is the number of the target-type object detected by mark information identification;
S3c, for the Object_registration_set and Identification_registration_set acquired in steps S3a and S3b, calculating the crossing rate between target-type object frames of the two coordinate sets within the same video frame through the two-rectangle crossing rate algorithm; if the crossing rate reaches 80%, the two frames are determined to belong to the same target-type object, the accuracy rates of their mark information are compared, the information with the highest accuracy rate is selected as the mark information, and its coordinate track is recorded.
6. The target object trajectory analysis method of claim 5, wherein: the specific method of calculating the crossing rate between target-type object frames of the two coordinate sets within the same video frame through the two-rectangle crossing rate algorithm in step S3c is as follows:
a. calculating the two rectangle areas s1 and s2;
b. if (x_imin ≥ x_jmax) or (x_imax ≤ x_jmin) or (y_imin ≥ y_jmax) or (y_imax ≤ y_jmin), the intersection area of the two rectangles is considered to be 0 and the crossing rate is also 0; otherwise, going to step c;
c. if the two rectangles overlap, the overlap region must be a rectangle; for the candidate point set ([x_imin, y_imin], [x_imin, y_imax], [x_imin, y_jmin], [x_imin, y_jmax], [x_imax, y_imin], [x_imax, y_imax], [x_imax, y_jmin], [x_imax, y_jmax], [x_jmin, y_imin], [x_jmin, y_imax], [x_jmin, y_jmin], [x_jmin, y_jmax], [x_jmax, y_imin], [x_jmax, y_imax], [x_jmax, y_jmin], [x_jmax, y_jmax]), 16 points in total, judging whether each point lies within the boundaries of both rectangles simultaneously, namely recording a point P into the point set points_set of the overlap region if x_imin ≤ x_P ≤ x_imax and y_imin ≤ y_P ≤ y_imax and x_jmin ≤ x_P ≤ x_jmax and y_jmin ≤ y_P ≤ y_jmax;
d. de-duplicating the point set points_set to obtain four points, which are the four corner coordinates of the overlap-region rectangle;
e. calculating the area s of the overlap region from the four corner coordinates of the overlap-region rectangle;
f. calculating the crossing rate of the overlap region as W = (s/s1 + s/s2)/2; this crossing rate W is the crossing rate of the target-type object frames.
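Steps a–f above can be condensed: once step b has ruled out disjoint rectangles, the overlap corners found by the 16-point test of steps c–d reduce to max/min comparisons. The sketch below is an illustrative implementation of the claimed crossing rate, not the patent's literal procedure (names are assumptions):

```python
def crossing_rate(box_i, box_j):
    """Crossing rate W = (s/s1 + s/s2)/2 of two axis-aligned boxes.

    Each box is [[xmin, ymin], [xmax, ymax]], as in steps S3a/S3b."""
    (xi0, yi0), (xi1, yi1) = box_i
    (xj0, yj0), (xj1, yj1) = box_j
    s1 = (xi1 - xi0) * (yi1 - yi0)  # step a: the two rectangle areas
    s2 = (xj1 - xj0) * (yj1 - yj0)
    # step b: disjoint boxes have zero overlap, hence zero crossing rate
    w = min(xi1, xj1) - max(xi0, xj0)
    h = min(yi1, yj1) - max(yi0, yj0)
    if w <= 0 or h <= 0:
        return 0.0
    s = w * h  # steps c-e: area of the overlap rectangle
    return (s / s1 + s / s2) / 2  # step f
```

Identical boxes give W = 1.0; claim 5 treats W ≥ 0.8 as the same target-type object. Note that W is the mean of the two per-box overlap fractions, which is not the same quantity as IoU.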
CN202010290588.4A 2020-04-14 2020-04-14 Target object track analysis method Active CN111489380B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010290588.4A CN111489380B (en) 2020-04-14 2020-04-14 Target object track analysis method


Publications (2)

Publication Number Publication Date
CN111489380A true CN111489380A (en) 2020-08-04
CN111489380B CN111489380B (en) 2022-04-12

Family

ID=71812756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010290588.4A Active CN111489380B (en) 2020-04-14 2020-04-14 Target object track analysis method

Country Status (1)

Country Link
CN (1) CN111489380B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114220077A (en) * 2022-02-21 2022-03-22 金叶仪器(山东)有限公司 Method for realizing object quantity statistics and moving direction monitoring based on monitoring equipment
CN115018433A (en) * 2022-08-10 2022-09-06 四川港投新通道物流产业投资集团有限公司 Wine supply chain monitoring method, device, equipment and medium
CN115601686A (en) * 2022-12-09 2023-01-13 浙江莲荷科技有限公司(Cn) Method, device and system for confirming delivery of articles

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106846374A (en) * 2016-12-21 2017-06-13 大连海事大学 The track calculating method of vehicle under multi-cam scene
CN107483889A (en) * 2017-08-24 2017-12-15 北京融通智慧科技有限公司 The tunnel monitoring system of wisdom building site control platform
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN110378931A (en) * 2019-07-10 2019-10-25 成都数之联科技有限公司 A kind of pedestrian target motion track acquisition methods and system based on multi-cam
CN110428443A (en) * 2019-08-11 2019-11-08 上海天诚比集科技有限公司 A kind of intelligence community Vehicle tracing method
CN110619277A (en) * 2019-08-15 2019-12-27 青岛文达通科技股份有限公司 Multi-community intelligent deployment and control method and system
CN110619657A (en) * 2019-08-15 2019-12-27 青岛文达通科技股份有限公司 Multi-camera linkage multi-target tracking method and system for smart community


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Qu, "Research on a Multi-Camera Collaborative Target Tracking Method Based on Deep Learning", China Master's Theses Full-text Database, Engineering Science and Technology I *


Also Published As

Publication number Publication date
CN111489380B (en) 2022-04-12

Similar Documents

Publication Publication Date Title
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN111489380B (en) Target object track analysis method
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
Albiol et al. Detection of parked vehicles using spatiotemporal maps
KR102122859B1 (en) Method for tracking multi target in traffic image-monitoring-system
US20080212099A1 (en) Method for counting people passing through a gate
CN108986472B (en) Method and device for monitoring vehicle turning round
CN111626275B (en) Abnormal parking detection method based on intelligent video analysis
CN111738240A (en) Region monitoring method, device, equipment and storage medium
CN111325048B (en) Personnel gathering detection method and device
CN106558224B (en) A kind of traffic intelligent monitoring and managing method based on computer vision
CN102855508B (en) Opening type campus anti-following system
CN104200466A (en) Early warning method and camera
Chang et al. Video analytics in smart transportation for the AIC'18 challenge
CN112733598A (en) Vehicle law violation determination method and device, computer equipment and storage medium
CN112084892B (en) Road abnormal event detection management device and method thereof
CN113903008A (en) Ramp exit vehicle violation identification method based on deep learning and trajectory tracking
KR20210158037A (en) Method for tracking multi target in traffic image-monitoring-system
Huang et al. A real-time and color-based computer vision for traffic monitoring system
CN112132048A (en) Community patrol analysis method and system based on computer vision
Mehboob et al. Trajectory based vehicle counting and anomalous event visualization in smart cities
CN112104838B (en) Image distinguishing method, monitoring camera and monitoring camera system thereof
WO2018209470A1 (en) License plate identification method and system
CN105227918A (en) A kind of intelligent control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231205

Address after: No.2252, Shaling Road, Shatou, Jiangsu Province

Patentee after: JIANGSU TC SMART SYSTEMS GROUP Co.,Ltd.

Address before: Room 904, building 2, No. 618, Guangxing Road, Songjiang District, Shanghai 201613

Patentee before: SHANGHAI TIANCHENG BIJI TECHNOLOGY Co.,Ltd.

CP03 Change of name, title or address

Address after: No.2252, Shaling Road, Shatou, Jiangsu Province

Patentee after: Jiangsu Tiancheng Intelligent Group Co.,Ltd.

Country or region after: China

Address before: No.2252, Shaling Road, Shatou, Jiangsu Province

Patentee before: JIANGSU TC SMART SYSTEMS GROUP Co.,Ltd.

Country or region before: China
