CN111242077A - Figure tracking method, system and server - Google Patents

Figure tracking method, system and server

Info

Publication number
CN111242077A
CN111242077A (application CN202010066174.3A)
Authority
CN
China
Prior art keywords
person
detected
features
camera group
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010066174.3A
Other languages
Chinese (zh)
Inventor
约翰·阿尔伯特·卡迈克尔
陆博
陈茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orca Data Technology Xian Co Ltd
Original Assignee
Orca Data Technology Xian Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orca Data Technology Xian Co Ltd
Priority to CN202010066174.3A
Publication of CN111242077A

Classifications

    • G06V40/166 Detection; Localisation; Normalisation of human faces using acquisition arrangements
    • G06F16/784 Retrieval of video data characterised by using metadata automatically derived from the content, the detected or recognised objects being people
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/168 Feature extraction; Face representation
    • G06V40/25 Recognition of walking or running movements, e.g. gait recognition
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

According to the person tracking method, system, and server provided by the embodiments of the invention, the camera groups in an area are connected to a person database to form a regional camera monitoring network. Given the features to be detected of a person to be detected, the positions at which the camera groups captured that person can be quickly queried in the database, and the camera-group positions at which the person appears are connected in time order, so that the person's moving track is obtained quickly. In addition, in the embodiments of the invention, the camera group captures a front image and a back image of the person to be detected, and the person's features are matched against the persons in the monitoring image information using any one of three methods: Intersection-over-Union (IoU) target detection, face-recognition embedding on tracked face detections, and gait-similarity matching.

Description

Figure tracking method, system and server
Technical Field
The invention relates to the technical field of trajectory tracking, and in particular to a person tracking method, a person tracking system, and a server.
Background
Image recognition is an information recognition technology that can effectively identify feature information such as faces, clothing, vehicles, and license plates. It has been applied to many areas of daily life, including person-trajectory retrieval. Traditional person-trajectory retrieval, however, still relies on manually replaying video footage, which is time-consuming, labor-intensive, and slow. This may suffice for after-the-fact criminal investigation or for finding a specific person, but it cannot meet the requirements of matching and tracking a person's trajectory across different cameras in multiple regions.
How to quickly and accurately identify the trajectory of a person entering a specific area remains a problem to be solved.
Disclosure of Invention
In order to achieve rapid and accurate trajectory identification of people entering an area, embodiments of the present invention provide a person tracking method, a person tracking system, and a server. The specific technical solution is as follows:
in a first aspect, an embodiment of the present invention provides a person tracking method, including:
creating a person database;
acquiring image information of a person to be detected according to the number of a target camera group, wherein the image information is a front image or a back image;
extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features, and gait features;
acquiring, from the person database, data information corresponding to the features to be detected;
when data information corresponding to the features to be detected exists in the person database, acquiring monitoring image information collected by the camera groups other than the target camera group;
when data information corresponding to the features to be detected does not exist in the person database, storing the features to be detected into the person database, and acquiring monitoring image information collected by the camera groups other than the target camera group;
searching the monitoring image information for the person to be detected according to the features to be detected, and taking each camera group whose monitoring image information contains the person to be detected as a track camera group; and connecting the target camera group and each track camera group in time order to form the moving track of the person to be detected.
Optionally, creating the person database includes:
taking any one person in the crowd as a target person;
acquiring front video information and back video information of the target person;
extracting a first feature from the front video information and a second feature from the back video information, wherein the first feature comprises human body features, facial features, and gait features, and the second feature comprises human body features and gait features;
creating a person access library according to the first feature and the second feature;
taking each person in the crowd in turn as the target person, creating a person access library corresponding to each person in the crowd, and storing each person access library to obtain the person database.
Optionally, the target camera group includes:
a first camera for acquiring a front image of the person to be detected;
and a second camera for acquiring a back image of the person to be detected.
Optionally, the person database receives and stores, in real time, video data shot by a plurality of camera groups distributed at different places; each item of video data includes multiple frames of image data, each frame of image data includes the image shot by the camera group, the shooting time, and the camera group number, and each camera group has a unique number.
Optionally, the preset method includes at least one of an Intersection-over-Union (IoU) target detection method, a tracking algorithm, a face-embedding matching detection method, and a gait recognition method.
Optionally, acquiring, from the person database, the data information corresponding to the features to be detected includes:
when the features to be detected comprise human body features, acquiring the data information corresponding to the features to be detected from the person database using a method based on intra-frame appearance detection, wherein the human body features comprise any one of clothing, clothing color, posture, and bone proportions;
when the features to be detected comprise facial features, acquiring the data information corresponding to the features to be detected from the person database using face-recognition embedding on tracked face detections;
and when the features to be detected comprise gait features, acquiring the data information corresponding to the features to be detected from the person database using a gait similarity matching method.
In a second aspect, an embodiment of the present invention provides a person tracking system, including:
a creation module for creating a person database;
an extraction module for acquiring image information of a person to be detected according to the number of a target camera group, wherein the image information is a front image or a back image;
an execution module for extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features, and gait features;
a judging module for acquiring, from the person database, data information corresponding to the features to be detected;
when data information corresponding to the features to be detected exists in the person database, acquiring monitoring image information collected by the camera groups other than the target camera group;
when data information corresponding to the features to be detected does not exist in the person database, storing the features to be detected into the person database, and acquiring monitoring image information collected by the camera groups other than the target camera group;
and a marking module for searching the monitoring image information for the person to be detected according to the features to be detected, taking each camera group whose monitoring image information contains the person to be detected as a track camera group, and connecting the target camera group and each track camera group in time order to form the moving track of the person to be detected.
Optionally, the creation module includes:
a selection submodule for taking any one person in the crowd as a target person;
an acquisition submodule for acquiring front video information and back video information of the target person;
an extraction submodule for extracting a first feature from the front video information and a second feature from the back video information, wherein the first feature comprises human body features, facial features, and gait features, and the second feature comprises human body features and gait features;
a creation submodule for creating a person access library according to the first feature and the second feature;
and a circulation submodule for taking each person in the crowd in turn as the target person, creating a person access library corresponding to each person in the crowd, and storing each person access library to obtain the person database.
In a third aspect, an embodiment of the present invention provides a server, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
and a processor for implementing any of the method steps of the first aspect when executing the program stored in the memory.
According to the person tracking method, system, and server provided by the embodiments of the invention, the camera groups in an area are connected to the person database to form a regional camera monitoring network. Given the features to be detected of a person to be detected, the positions at which the camera groups captured that person can be quickly queried in the database, and the camera-group positions at which the person appears are connected in time order, so that the person's moving track is obtained quickly. In addition, in the embodiments of the invention, the camera group captures a front image and a back image of the person to be detected, and the person's features are matched against the persons in the monitoring image information using any one of three methods: Intersection-over-Union (IoU) target detection, face-recognition embedding on tracked face detections, and gait-similarity matching. The person tracking method therefore increases matching fault tolerance, solves the technical problem that the person to be detected cannot be accurately identified when no face information is present in the monitoring image information, and improves matching efficiency, so that the method can mark the moving track of the person to be detected quickly and accurately.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings. Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a flowchart of a person tracking method according to an embodiment of the present invention;
FIG. 2 is a block diagram of a person tracking system according to an embodiment of the present invention;
fig. 3 is a module connection diagram of a server according to an embodiment of the present invention;
fig. 4 is an architecture diagram for creating a character database according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
In order to achieve rapid and accurate trajectory identification of people entering an area, embodiments of the present invention provide a person tracking method, a person tracking system, and a server.
Referring to fig. 1, in a first aspect, an embodiment of the present invention provides a person tracking method, including:
s110, creating a character database;
s120, acquiring image information of a person to be detected according to the number of the target camera group; wherein, the image information is a front image or a back image;
s130, extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features and gait features;
s140, acquiring data information corresponding to the characteristics to be detected in the character database;
when data information corresponding to the characteristics to be detected exists in the character database, acquiring monitoring image information acquired by other camera groups except the target camera group;
when data information corresponding to the characteristics to be detected does not exist in the character database, storing the characteristics to be detected into the character database, and acquiring monitoring image information acquired by other camera groups except the target camera group;
s150, searching the character to be detected in the monitoring image information according to the characteristic to be detected, and taking a camera group corresponding to the monitoring image information of the character to be detected as a track camera group; and connecting the target camera group and each track camera group according to a time sequence to form the moving track of the person to be detected.
Further, referring to fig. 4, creating the person database includes:
taking any one person in the crowd as a target person;
acquiring front video information and back video information of the target person;
extracting a first feature from the front video information and a second feature from the back video information, wherein the first feature comprises human body features, facial features, and gait features, and the second feature comprises human body features and gait features;
creating a person access library according to the first feature and the second feature;
taking each person in the crowd in turn as the target person, creating a person access library corresponding to each person in the crowd, and storing each person access library to obtain the person database.
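The database-creation loop above can be sketched roughly as follows; `toy_extract` is a hypothetical stand-in for real detection and embedding models, and all names here are illustrative:

```python
def toy_extract(video, facial):
    """Stand-in feature extractor. A real system would run body, face, and
    gait models here; `video` is any string stand-in for a clip."""
    feats = {"body": len(video), "gait": video[-1]}
    if facial:
        feats["face"] = video[:3]  # facial features only from front footage
    return feats

def create_person_db(people_videos, extract):
    """For each person: a first feature set from the front video (body, face,
    gait) and a second from the back video (body, gait), stored together as
    that person's access-library entry."""
    db = {}
    for person_id, (front_video, back_video) in people_videos.items():
        db[person_id] = {
            "front": extract(front_video, facial=True),
            "back": extract(back_video, facial=False),
        }
    return db
```

Note that, matching the text above, the back-view entry carries no facial features.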
In fig. 4, reference numeral 1 is the video data of camera No. 1 viewed from the front; reference numeral 2 is the video data of camera No. 1 viewed from the back; reference numeral 3 is the action scene; reference numeral 4 is a person feature extracted from video stream No. 1; reference numeral 5 is a person feature extracted from video stream No. 2; reference numeral 6 is the generation of a cross-camera access entry; reference numeral 7 is a person feature extracted from video stream No. 1; reference numeral 8 is a person feature extracted from video stream No. 2; reference numeral 9 is person feature matching; reference numeral 11 is video stream No. 1; reference numeral 12 is video stream No. 2; reference numeral 13 is a cross-sectional view of the camera; reference numeral 14 is the person at timestamp t_1 in video stream No. 1; reference numeral 15 is the person at timestamp t_2 in video stream No. 1; reference numeral 16 is the person at timestamp t_3 in video stream No. 1; reference numeral 17 is the transition from timestamp t_1 to t_2 in video stream No. 1; reference numeral 18 is the transition from timestamp t_2 to t_3 in video stream No. 1; reference numeral 19 is the person at timestamp t_1 in video stream No. 2; reference numeral 20 is the person at timestamp t_2 in video stream No. 2; reference numeral 21 is the person at timestamp t_3 in video stream No. 2; reference numeral 22 is the transition from timestamp t_1 to t_2 in video stream No. 2; reference numeral 23 is the transition from timestamp t_2 to t_3 in video stream No. 2; reference numeral 24 is the person at time t_1; reference numeral 25 is the person at time t_2; reference numeral 26 is the person at time t_3; reference numeral 27 is the distance from camera No. 1 to the person at time t_1; reference numeral 28 is the distance from camera No. 1 to the person at time t_2; reference numeral 29 is the distance from camera No. 1 to the person at time t_3; reference numeral 30 is the height of camera No. 1; and reference numeral 31 is the vertical angle of the lower limit of the field of view of camera No. 1.
Further, the target camera group includes:
a first camera for acquiring a front image of the person to be detected;
and a second camera for acquiring a back image of the person to be detected.
Furthermore, the person database receives and stores, in real time, video data shot by a plurality of camera groups distributed at different places; each item of video data comprises multiple frames of image data, each frame of image data comprises the image shot by the camera group, the shooting time, and the camera group number, and each camera group has a unique number.
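The per-frame record described above might be modelled like this; the field and function names are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FrameRecord:
    """One frame of image data as described above: the image shot by a
    camera group, the shooting time, and the camera group's unique number."""
    camera_group_id: int     # unique per camera group
    shot_at: datetime        # shooting time
    image: bytes             # encoded frame, e.g. JPEG bytes

def ingest(db, record):
    """Append a frame record to the person database, keyed by camera group."""
    db.setdefault(record.camera_group_id, []).append(record)
```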
Further, the preset method includes at least one of an Intersection-over-Union (IoU) target detection method, a tracking algorithm, a face-embedding matching detection method, and a gait recognition method.
Specifically, Intersection-over-Union (IoU) matches detection result boxes by their overlap. This is in fact a very robust tracking method: a concept used in target detection is the overlap ratio between a generated candidate box and the original ground-truth bounding box, i.e. the ratio of their intersection to their union. The optimal situation is complete overlap, i.e. a ratio of 1.
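A minimal IoU computation, using the common (x1, y1, x2, y2) corner convention, looks like this:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give 1.0 (the optimal case mentioned above) and disjoint boxes give 0.0.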
A tracking algorithm uses deep body features to match existing trajectories with next-frame detections. This includes pose (skeleton) information; state-of-the-art (SOTA) methods extract deep features from the cropped human body.
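A sketch of such matching, assuming each trajectory and each detection is represented by a deep-feature vector and using greedy cosine-similarity association (a simplification of what real trackers do):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors given as lists."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_tracks(track_feats, det_feats, threshold=0.6):
    """Greedily associate existing trajectories with next-frame detections
    by cosine similarity of their deep body features; returns
    (track_index, detection_index) pairs, best matches first."""
    scored = sorted(
        ((cosine(t, d), ti, di)
         for ti, t in enumerate(track_feats)
         for di, d in enumerate(det_feats)),
        reverse=True)
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in scored:
        if score < threshold:
            break  # remaining pairs are too dissimilar to associate
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

The threshold value is an assumption; tuning it trades identity switches against lost tracks.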
The face-embedding matching detection method works only when the face is large enough for facial features to be extracted.
The gait recognition method is theoretically feasible: it uses gait recognition to identify the features to be detected of the person to be detected. Gait recognition is a relatively new biometric technology that aims to identify people by their walking posture. Compared with other biometric technologies, it is contactless, works at long range, and is difficult to disguise, which gives it advantages over image recognition in the field of intelligent video surveillance. Gait refers to the way a person walks and is a complex behavioral characteristic. A criminal may take care not to leave even a single hair at the scene, but there is one thing that is very hard to control: the walking posture.
In the embodiment of the invention, the features to be detected are extracted from the captured person in multiple dimensions, which increases the fault-tolerance rate and improves accuracy.
further, the step of obtaining data information corresponding to the feature to be detected from the character database includes:
when the features to be detected comprise human body features, acquiring the data information corresponding to the features to be detected from the person database using a method based on intra-frame appearance detection, wherein the human body features comprise any one of clothing, clothing color, posture, and bone proportions.
Specifically, for intra-frame appearance detection, deep features can be extracted just as in the tracking process; these theoretically contain all the necessary information. Features can also be extracted manually, such as clothing color, posture, and bone proportions. A person can be identified at high quality by the body detector; here the person needs to occupy at least 100x100 px.
When the features to be detected comprise facial features, the data information corresponding to the features to be detected is acquired from the person database using face-recognition embedding on tracked face detections. If the detections on both tracks contain faces large enough to extract robust facial features, these embeddings can be used to match the tracks. For a face to be detectable, 48x48 px is well suited; the smaller the size, the lower the accuracy. Quantitative thresholds require further study, and if information from subsequent frames is combined, detection may be good enough even at a face size of 24x24 px; however, this is difficult in terms of system throughput, since detection speed depends on the minimum face-detection size. For a face to be recognizable, the ideal size is 100x100 px. Performance degrades as the minimum size decreases, and the precise dependence remains to be studied; face-recognition performance at 80x80 px is still high. Note, however, that motion blur has a large influence on the quality of face recognition.
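A hedged sketch of this size-gating logic (the pixel thresholds follow the figures quoted above; the embedding representation and the similarity threshold are assumptions for the sketch):

```python
from math import sqrt

MIN_RECOG_PX = 80  # per the text: recognition quality stays high down to ~80x80 px

def cosine(a, b):
    """Cosine similarity between two embedding vectors given as lists."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def face_match(face_w, face_h, query_emb, gallery, threshold=0.5):
    """Return the best-matching gallery identity for a detected face, or
    None when the face is smaller than the assumed reliable-recognition
    size (in that case, body or gait features should be used instead)."""
    if min(face_w, face_h) < MIN_RECOG_PX:
        return None  # too small: embedding would be unreliable
    best_id, best_sim = None, threshold
    for person_id, emb in gallery.items():
        sim = cosine(query_emb, emb)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id
```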
When the features to be detected comprise gait features, the data information corresponding to the features to be detected is acquired from the person database using a gait similarity matching method, i.e. matching based on gait similarity. It is feasible to construct a gait feature for each trajectory and compare the features mathematically.
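One purely illustrative way to build and compare a per-trajectory gait feature; the signature components here are invented for the sketch and are not the patent's method:

```python
from math import sqrt

def gait_signature(stride_widths):
    """Hypothetical gait feature for one trajectory: mean, spread, and
    zero-crossing rate of the (mean-centred) stride-width signal over time."""
    n = len(stride_widths)
    mean = sum(stride_widths) / n
    centred = [x - mean for x in stride_widths]
    spread = sqrt(sum(x * x for x in centred) / n)
    crossings = sum(1 for a, b in zip(centred, centred[1:]) if a * b < 0)
    return (mean, spread, crossings / (n - 1))

def gait_similarity(sig_a, sig_b):
    """Inverse-distance similarity in (0, 1] between two gait signatures."""
    dist = sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))
    return 1.0 / (1.0 + dist)
```

Two trajectories of the same walker (similar stride rhythm) score near 1; a very different motion pattern scores much lower.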
According to the person tracking method, system, and server provided by the embodiments of the invention, the camera groups in an area are connected to the person database to form a regional camera monitoring network. Given the features to be detected of a person to be detected, the positions at which the camera groups captured that person can be quickly queried in the database, and the camera-group positions at which the person appears are connected in time order, so that the person's moving track is obtained quickly. In addition, in the embodiments of the invention, the camera group captures a front image and a back image of the person to be detected, and the person's features are matched against the persons in the monitoring image information using any one of three methods: Intersection-over-Union (IoU) target detection, face-recognition embedding on tracked face detections, and gait-similarity matching. The person tracking method therefore increases matching fault tolerance, solves the technical problem that the person to be detected cannot be accurately identified when no face information is present in the monitoring image information, and improves matching efficiency, so that the method can mark the moving track of the person to be detected quickly and accurately.
In a second aspect, referring to fig. 2, an embodiment of the invention provides a person tracking system, including:
a creation module for creating a person database;
an extraction module for acquiring image information of a person to be detected according to the number of a target camera group, wherein the image information is a front image or a back image;
an execution module for extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features, and gait features;
a judging module for acquiring, from the person database, data information corresponding to the features to be detected;
when data information corresponding to the features to be detected exists in the person database, acquiring monitoring image information collected by the camera groups other than the target camera group;
when data information corresponding to the features to be detected does not exist in the person database, storing the features to be detected into the person database, and acquiring monitoring image information collected by the camera groups other than the target camera group;
and a marking module for searching the monitoring image information for the person to be detected according to the features to be detected, taking each camera group whose monitoring image information contains the person to be detected as a track camera group, and connecting the target camera group and each track camera group in time order to form the moving track of the person to be detected.
Further, the creating module includes:
the selection submodule is used for taking any one person in the crowd as a target person;
the acquisition submodule is used for acquiring front video information and back video information of the target person;
the extraction submodule is used for extracting a first feature from the front video information and a second feature from the back video information, wherein the first feature comprises human body features, facial features and gait features, and the second feature comprises human body features and gait features;
the creating submodule is used for creating a person access library according to the first feature and the second feature;
the circulation submodule is used for taking each person in the crowd in turn as the target person, and creating and storing a person access library corresponding to each person in the crowd, so as to obtain the person database.
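The creation loop above can be sketched as follows. The feature extractors are placeholders (a real system would run detection and embedding models over the video); all function names are illustrative assumptions, not from the patent.

```python
from typing import Dict, Iterable, Tuple

# Placeholder extractors standing in for real model inference.
def extract_body(video: str) -> str: return f"body({video})"
def extract_face(video: str) -> str: return f"face({video})"
def extract_gait(video: str) -> str: return f"gait({video})"

def create_person_database(
        crowd: Iterable[Tuple[str, str, str]]) -> Dict[str, dict]:
    """crowd: iterable of (person_id, front_video, back_video).
    First features come from the front video (body, face, gait);
    second features come from the back video (body, gait only,
    since no face is visible from behind)."""
    database: Dict[str, dict] = {}
    for person_id, front_video, back_video in crowd:
        first = {"body": extract_body(front_video),
                 "face": extract_face(front_video),
                 "gait": extract_gait(front_video)}
        second = {"body": extract_body(back_video),
                  "gait": extract_gait(back_video)}
        database[person_id] = {"first": first, "second": second}
    return database
```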
In a third aspect, referring to fig. 3, an embodiment of the present invention provides a server, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
the processor is used for implementing the following method steps when executing the program stored in the memory:
S110, creating a person database;
S120, acquiring image information of a person to be detected according to the number of the target camera group, wherein the image information is a front image or a back image;
S130, extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features and gait features;
S140, acquiring data information corresponding to the features to be detected in the person database;
when data information corresponding to the features to be detected exists in the person database, acquiring monitoring image information acquired by camera groups other than the target camera group;
when data information corresponding to the features to be detected does not exist in the person database, storing the features to be detected into the person database, and acquiring monitoring image information acquired by camera groups other than the target camera group;
S150, searching for the person to be detected in the monitoring image information according to the features to be detected, and taking each camera group whose monitoring image information contains the person to be detected as a track camera group; and connecting the target camera group and each track camera group in time order to form the moving track of the person to be detected.
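Step S140's match-or-enroll logic (use the database entry if a match exists, otherwise store the new feature) can be sketched as below. The similarity function, threshold value, and all names here are illustrative assumptions; the patent does not specify a matching metric.

```python
from typing import Callable, Dict, Optional, Tuple

def match_or_enroll(db: Dict[str, str], feature: str,
                    threshold: float = 0.8,
                    similarity: Optional[Callable] = None
                    ) -> Tuple[str, bool]:
    """Look up the feature to be detected in the person database.
    Returns (person_id, newly_enrolled). If no stored feature is
    similar enough, the feature is enrolled as a new person."""
    if similarity is None:
        # Trivial exact-match stand-in for a real embedding distance.
        similarity = lambda a, b: 1.0 if a == b else 0.0
    best_id, best_score = None, 0.0
    for person_id, stored in db.items():
        score = similarity(feature, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    if best_id is not None and best_score >= threshold:
        return best_id, False          # known person: reuse existing entry
    new_id = f"person_{len(db)}"       # unknown person: enroll
    db[new_id] = feature
    return new_id, True
```

Either branch then proceeds identically: monitoring image information from the other camera groups is searched for the same feature.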
The communication bus of the above server may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the above server and other devices.
The memory may include a Random Access Memory (RAM), or may include a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In another embodiment provided by the present invention, a computer-readable storage medium is further provided, which stores instructions that, when executed on a computer, cause the computer to perform any one of steps S110 to S150 in the above embodiments.
In a further embodiment of the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any one of steps S110 to S150 described in any of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. The above description covers only preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A person tracking method, comprising:
creating a person database;
acquiring image information of a person to be detected according to the number of the target camera group, wherein the image information is a front image or a back image;
extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features and gait features;
acquiring data information corresponding to the features to be detected in the person database;
when data information corresponding to the features to be detected exists in the person database, acquiring monitoring image information acquired by camera groups other than the target camera group;
when data information corresponding to the features to be detected does not exist in the person database, storing the features to be detected into the person database, and acquiring monitoring image information acquired by camera groups other than the target camera group;
searching for the person to be detected in the monitoring image information according to the features to be detected, and taking each camera group whose monitoring image information contains the person to be detected as a track camera group; and connecting the target camera group and each track camera group in time order to form the moving track of the person to be detected.
2. The person tracking method according to claim 1, wherein creating the person database comprises:
taking any one person in the crowd as a target person;
acquiring front video information and back video information of the target person;
extracting a first feature from the front video information and a second feature from the back video information, wherein the first feature comprises human body features, facial features and gait features, and the second feature comprises human body features and gait features;
creating a person access library according to the first feature and the second feature;
taking each person in the crowd in turn as the target person, and creating and storing a person access library corresponding to each person in the crowd, so as to obtain the person database.
3. The person tracking method according to claim 1, wherein the target camera group comprises:
a first camera, used for acquiring a front image of the person to be detected; and
a second camera, used for acquiring a back image of the person to be detected.
4. The person tracking method according to claim 1, wherein the person database receives and stores, in real time, video data captured by a plurality of camera groups distributed at different locations, each video data comprising a plurality of frames of image data, and each frame of image data comprising an image captured by a camera group, a capture time, and a camera group number, wherein each camera group has a unique number.
5. The person tracking method according to claim 1, wherein the preset method comprises at least one of an intersection-over-union based target detection method, a tracking algorithm, a face embedding matching detection method, and a gait recognition method.
6. The person tracking method according to claim 1, wherein acquiring data information corresponding to the features to be detected in the person database comprises:
when the features to be detected comprise human body features, acquiring data information corresponding to the features to be detected in the person database by using an intra-frame appearance detection-based method, wherein the human body features comprise any one of clothing, clothing color, posture, and bone proportion;
when the features to be detected comprise facial features, acquiring data information corresponding to the features to be detected in the person database by using a face recognition embedding method applied to tracked face detections;
and when the features to be detected comprise gait features, acquiring data information corresponding to the features to be detected in the person database by using a gait similarity matching method.
7. A person tracking system, comprising:
the creation module is used for creating a person database;
the extraction module is used for acquiring image information of the person to be detected according to the number of the target camera group, wherein the image information is a front image or a back image;
the execution module is used for extracting features to be detected from the image information according to a preset method, wherein the features to be detected comprise at least one of human body features, facial features and gait features;
the judging module is used for acquiring data information corresponding to the features to be detected in the person database;
when data information corresponding to the features to be detected exists in the person database, acquiring monitoring image information acquired by camera groups other than the target camera group;
when data information corresponding to the features to be detected does not exist in the person database, storing the features to be detected into the person database, and acquiring monitoring image information acquired by camera groups other than the target camera group;
the marking module is used for searching for the person to be detected in the monitoring image information according to the features to be detected, and taking each camera group whose monitoring image information contains the person to be detected as a track camera group; and connecting the target camera group and each track camera group in time order to form the moving track of the person to be detected.
8. The person tracking system of claim 7, wherein the creation module comprises:
the selection submodule is used for taking any one person in the crowd as a target person;
the acquisition submodule is used for acquiring front video information and back video information of the target person;
the extraction submodule is used for extracting a first feature from the front video information and a second feature from the back video information, wherein the first feature comprises human body features, facial features and gait features, and the second feature comprises human body features and gait features;
the creating submodule is used for creating a person access library according to the first feature and the second feature;
the circulation submodule is used for taking each person in the crowd in turn as the target person, and creating and storing a person access library corresponding to each person in the crowd, so as to obtain the person database.
9. A server, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1-6 when executing a program stored in the memory.
CN202010066174.3A 2020-01-20 2020-01-20 Figure tracking method, system and server Pending CN111242077A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010066174.3A CN111242077A (en) 2020-01-20 2020-01-20 Figure tracking method, system and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010066174.3A CN111242077A (en) 2020-01-20 2020-01-20 Figure tracking method, system and server

Publications (1)

Publication Number Publication Date
CN111242077A true CN111242077A (en) 2020-06-05

Family

ID=70872870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010066174.3A Pending CN111242077A (en) 2020-01-20 2020-01-20 Figure tracking method, system and server

Country Status (1)

Country Link
CN (1) CN111242077A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140056490A1 (en) * 2012-08-24 2014-02-27 Kabushiki Kaisha Toshiba Image recognition apparatus, an image recognition method, and a non-transitory computer readable medium thereof
CN105069408A (en) * 2015-07-24 2015-11-18 上海依图网络科技有限公司 Video portrait tracking method based on human face identification in complex scenario
CN110309716A (en) * 2019-05-22 2019-10-08 深圳壹账通智能科技有限公司 Service tracks method, apparatus, equipment and storage medium based on face and posture
CN110532923A (en) * 2019-08-21 2019-12-03 深圳供电局有限公司 Figure track retrieval method and system
CN110532432A (en) * 2019-08-21 2019-12-03 深圳供电局有限公司 Character track retrieval method and system and computer readable storage medium


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI778761B (en) * 2020-08-28 2022-09-21 大陸商北京市商湯科技開發有限公司 Methods, apparatuses for determining activity areas of target objects, devices and storage media
CN111898592A (en) * 2020-09-29 2020-11-06 腾讯科技(深圳)有限公司 Track data processing method and device and computer readable storage medium
CN111898592B (en) * 2020-09-29 2020-12-29 腾讯科技(深圳)有限公司 Track data processing method and device and computer readable storage medium
CN113378621A (en) * 2021-04-06 2021-09-10 张沈莘 Track traffic informatization front-end monitoring system, monitoring method and storage medium
CN113378621B (en) * 2021-04-06 2024-03-22 张沈莘 Rail transit informatization front-end monitoring system, monitoring method and storage medium
CN113221800A (en) * 2021-05-24 2021-08-06 珠海大横琴科技发展有限公司 Monitoring and judging method and system for target to be detected
CN113747115A (en) * 2021-06-25 2021-12-03 深圳市威尔电器有限公司 Method, system, device and storage medium for monitoring video of eye-to-eye network
CN113537107A (en) * 2021-07-23 2021-10-22 山东浪潮通软信息科技有限公司 Face recognition and tracking method, device and equipment based on deep learning

Similar Documents

Publication Publication Date Title
Zhuo et al. Occluded person re-identification
CN111242077A (en) Figure tracking method, system and server
KR101972918B1 (en) Apparatus and method for masking a video
KR102167730B1 (en) Apparatus and method for masking a video
Wang et al. P2snet: Can an image match a video for person re-identification in an end-to-end way?
US20060093185A1 (en) Moving object recognition apparatus
CN105631430A (en) Matching method and apparatus for face image
Lin et al. Visual-attention-based background modeling for detecting infrequently moving objects
CN111145223A (en) Multi-camera personnel behavior track identification analysis method
Singh et al. A comprehensive survey on person re-identification approaches: various aspects
CN113537107A (en) Face recognition and tracking method, device and equipment based on deep learning
Yadav et al. Human Illegal Activity Recognition Based on Deep Learning Techniques
Fookes et al. Semi-supervised intelligent surveillance system for secure environments
KR101826669B1 (en) System and method for video searching
Jaiswal et al. Survey paper on various techniques of recognition and tracking
De Marsico et al. ES-RU: an e ntropy based rule to s elect r epresentative templates in face su rveillance
Taha et al. Exploring behavior analysis in video surveillance applications
Mir et al. Criminal action recognition using spatiotemporal human motion acceleration descriptor
Ladjailia et al. Encoding human motion for automated activity recognition in surveillance applications
Shen et al. An Interactively Motion-Assisted Network for Multiple Object Tracking in Complex Traffic Scenes
Senior An introduction to automatic video surveillance
Balasubramanian et al. Forensic video solution using facial feature‐based synoptic Video Footage Record
Xu et al. Smart video surveillance system
Hemaanand et al. Smart surveillance system using computer vision and Internet of Things
Kurchaniya et al. Two stream deep neural network based framework to detect abnormal human activities

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200605