CN110874878B - Pedestrian analysis method, device, terminal and storage medium - Google Patents

Pedestrian analysis method, device, terminal and storage medium

Info

Publication number
CN110874878B
CN110874878B (application CN201810904924.2A)
Authority
CN
China
Prior art keywords
person
face
preset
registered
unregistered
Prior art date
Legal status
Active
Application number
CN201810904924.2A
Other languages
Chinese (zh)
Other versions
CN110874878A (en)
Inventor
彭齐荣 (Peng Qirong)
赵猛 (Zhao Meng)
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201810904924.2A
Publication of CN110874878A
Application granted
Publication of CN110874878B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G06V40/173: Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50: Maintenance of biometric data or enrolment thereof

Abstract

A pedestrian analysis method, comprising: acquiring face data captured within a preset time period from a preset face library; identifying whether each face image in the preset face library is the face image of a registered person; if so, extracting the first acquisition times corresponding to all face images of each registered person; if a face image instead belongs to an unregistered person, extracting the second acquisition times corresponding to all face images of each unregistered person; and when the time difference between a first and a second acquisition time is less than or equal to a preset difference threshold, determining that the unregistered person whose face image was captured at the second acquisition time and the registered person whose face image was captured at the first acquisition time are companions. The invention also provides a pedestrian analysis device, a terminal, and a storage medium. The invention can analyze pedestrians automatically and associate unregistered persons with the registered persons they accompany, which facilitates management.

Description

Pedestrian analysis method, device, terminal and storage medium
Technical Field
The invention relates to the technical field of regional management, in particular to a pedestrian analysis method, a pedestrian analysis device, a pedestrian analysis terminal and a storage medium.
Background
In existing pedestrian analysis for a given area, face pictures from a past preset time period are typically acquired every night, the faces of registered people are filtered out, the remaining faces are clustered, and unregistered people are thereby identified.
Suppose a community has 22 cameras that collect one million face pictures every 30 days, and 500 new unregistered persons appear every month. The prior-art pedestrian analysis method then has the following disadvantages: one million faces must be cluster-analyzed every day, and companion analysis must be run for 500 unregistered persons every day, so execution efficiency is very low and the number of communities a single server can support is greatly limited; millions of records must be stored every time the intelligent community analysis runs, which also poses a great challenge for database storage; and the list of unregistered persons differs from day to day (even the same unregistered person may be assigned a different head portrait on different days), which is inconvenient for administrators of the intelligent campus.
Disclosure of Invention
In view of the above, it is desirable to provide a pedestrian analysis method, apparatus, terminal and storage medium that can perform companion analysis on pedestrians in an intelligent campus, identify the companions of registered people, and automatically grant permissions to those companions.
The first aspect of the present invention provides a pedestrian analysis method, applied to a terminal, the method including:
acquiring face data within a preset time period from a preset face library, wherein the face data comprises: face images and corresponding acquisition times;
when the face images in the preset face library are determined to be the face images of the registered persons, extracting first acquisition time corresponding to all the face images of each registered person in the registered persons;
when the face images in the preset face library are determined not to be the face images of the registered persons but to be the face images of the unregistered persons, extracting second acquisition time corresponding to all the face images of each unregistered person in the unregistered persons;
calculating the differences between all first acquisition times of each registered person and all second acquisition times of each unregistered person to obtain all first time differences;
and determining, as companions, the unregistered person corresponding to the second acquisition time and the registered person corresponding to the related first acquisition time whenever the first time difference between them is less than or equal to a preset first difference threshold.
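The claimed steps can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the dictionary shapes mapping person identifiers to their lists of acquisition times, and the 2-second default threshold are all assumptions.

```python
from datetime import datetime

def find_companions(registered_times, unregistered_times, threshold_s=2.0):
    """Pair each unregistered person with any registered person one of whose
    face captures occurred within `threshold_s` seconds of one of theirs."""
    companions = set()
    for reg_id, reg_ts in registered_times.items():
        for unreg_id, unreg_ts in unregistered_times.items():
            # Compare every first acquisition time with every second one.
            if any(abs((t1 - t2).total_seconds()) <= threshold_s
                   for t1 in reg_ts for t2 in unreg_ts):
                companions.add((reg_id, unreg_id))
    return companions
```

A registered person captured at 09:00:00 and an unregistered person captured at 09:00:01 would be paired; one captured three hours later would not.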
Preferably, after determining as companions the unregistered person corresponding to the second acquisition time and the registered person corresponding to the related first acquisition time, where their first time difference is less than or equal to the preset first difference threshold, the method further includes:
storing the face image of the unregistered person determined to be a companion in association with the face image of the registered person;
and setting a first permission for the unregistered person.
Preferably, the method further comprises:
determining the face images in the preset face library that belong to neither a registered person nor an unregistered person as face images of unidentifiable persons, and marking the face images of the unidentifiable persons with a preset identifier.
Preferably, the method further comprises:
extracting the third acquisition times corresponding to all face images of each unidentifiable person among the unidentifiable persons;
calculating the differences between all first acquisition times of all registered persons and all third acquisition times of all unidentifiable persons to obtain all second time differences;
and determining, as companions, the unidentifiable person corresponding to the third acquisition time and the registered person corresponding to the related first acquisition time whenever the second time difference between them is less than or equal to a preset second difference threshold.
Preferably, after the unidentifiable person and the registered person are determined to be companions, the method further comprises:
storing the face image of the unidentifiable person determined to be a companion, in association with the face image of the accompanying registered person, in a preset unregistered library;
and setting a second permission for the unidentifiable person.
Preferably, the second permission includes: controlling the entrance barrier gate to open when a preset answer to a preset question is received from the unidentifiable person.
Preferably, the face images of unregistered persons are obtained by:
acquiring, on a first pass, all face images within a previous preset time period from the preset face library;
recognizing the face images of registered persons among all the face images;
and determining the remaining face images, other than those of registered persons, as face images of unregistered persons and storing them in the second registration library.
A second aspect of the present invention provides a pedestrian analysis apparatus operating in a terminal, the apparatus comprising:
an acquisition module, configured to acquire face data within a preset time period from a preset face library, wherein the face data comprises: face images and corresponding acquisition times;
a recognition module, configured to identify whether the face images in the preset face library are face images of registered persons;
an extraction module, configured to extract, when the recognition module determines that the face images in the preset face library are face images of registered persons, the first acquisition times corresponding to all face images of each registered person;
the extraction module is further configured to extract, when the recognition module determines that the face images in the preset face library are not face images of registered persons but are face images of unregistered persons, the second acquisition times corresponding to all face images of each unregistered person;
a calculation module, configured to calculate the differences between all first acquisition times of each registered person and all second acquisition times of each unregistered person to obtain all first time differences; and
a determining module, configured to determine, as companions, the unregistered person corresponding to the second acquisition time and the registered person corresponding to the related first acquisition time whenever the first time difference between them is less than or equal to a preset first difference threshold.
A third aspect of the invention provides a terminal comprising a processor for implementing a pedestrian analysis method when executing a computer program stored in a memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a pedestrian analysis method.
In the present invention, face images within a preset time period, together with their acquisition times, are acquired from a preset face library; whether each face image belongs to a registered person is then identified; if so, the first acquisition times corresponding to all face images of each registered person are extracted; if a face image instead belongs to an unregistered person, the second acquisition times corresponding to all face images of each unregistered person are extracted; the differences between all first acquisition times of each registered person and all second acquisition times of each unregistered person are calculated to obtain all first time differences; and whenever a first time difference is less than or equal to the preset first difference threshold, the unregistered person corresponding to the second acquisition time and the registered person corresponding to the related first acquisition time are automatically determined to be companions, which facilitates management. Moreover, only the face data within the preset time period needs to be acquired and analyzed, which greatly reduces the number of faces to analyze, and the head portrait assigned to an unregistered person no longer changes over time. In addition, as time goes by, the number of face images of unregistered persons in the preset unregistered library increases while the number of face images of unidentifiable persons may decrease, and unregistered persons hold the first permission, which reduces the administrator's management burden.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only embodiments of the present invention, and those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a pedestrian analysis method according to an embodiment of the present invention.
Fig. 2 is a structural diagram of a pedestrian analysis apparatus according to a second embodiment of the present invention.
Fig. 3 is a schematic diagram of a terminal according to a third embodiment of the present invention.
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
Preferably, the pedestrian analysis method of the present invention is applied to one or more terminals or servers. The terminal is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and the hardware thereof includes but is not limited to a microprocessor, an application specific integrated circuit, a programmable gate array, an embedded device, and the like.
The terminal can be a desktop computer, a notebook, a palm computer, a cloud server and other computing equipment. The terminal can be in man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch panel or voice control equipment and the like.
Example one
Fig. 1 is a flowchart of a pedestrian analysis method according to an embodiment of the present invention. The pedestrian analysis method is applied to a terminal. The execution sequence in the flowchart shown in fig. 1 may be changed and some steps may be omitted according to different requirements.
In this embodiment, the pedestrian analysis method may be applied to an intelligent terminal with a photographing or video-recording function; such a terminal includes, but is not limited to, a personal computer, a smartphone, a tablet computer, or a desktop or all-in-one machine with a camera.
The pedestrian analysis method can also be applied to a hardware environment consisting of a terminal and a server connected to the terminal through a network. Networks include, but are not limited to: a wide area network, a metropolitan area network, or a local area network. The pedestrian analysis method in the embodiment of the invention can be executed by the server, by the terminal, or by both.
For example, for a terminal that needs to perform pedestrian analysis, the pedestrian analysis function provided by the method of the present invention may be integrated directly on the terminal, or a client implementing the method may be installed on it. Alternatively, the method may run on a device such as a server in the form of a Software Development Kit (SDK), with the pedestrian analysis function exposed through an SDK interface; a terminal or other device may then analyze companions in the intelligent campus through the provided interface.
As shown in fig. 1, the pedestrian analysis method specifically includes the following steps:
step 101: and acquiring the face data in a preset time period in a preset face library.
The preset time period may be a span of time before the current time point, for example the 24 hours or the week preceding it. It may also be a fixed time interval, for example from 00:00 to 24:00 of the previous day. The preset time period can be set and modified according to the purpose of the actual analysis.
The preset face library is a preset database dedicated to storing the face data of pedestrians passing through the entrance barrier gates. The face data may include: face images, the corresponding acquisition times, and the labels of the corresponding image acquisition devices. Face images are stored in the preset face library in order of acquisition time.
In this embodiment, multiple entrance barrier gates may be provided in the intelligent campus, and an image acquisition device may be installed at both the entrance and the exit of each barrier gate to capture the face images of pedestrians entering and leaving through that gate.
In this embodiment, a face library may be set up for each barrier gate, with the face images captured by the devices at the entrance and exit of the same gate stored together in the corresponding preset face library. Alternatively, a face library may be set up for each image acquisition device, with the images captured by the same device stored in its own preset face library. It is also possible to preset only two face libraries, a first and a second: the first stores the face images captured by the devices at gate entrances, and the second stores the face images captured by the devices at gate exits.
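The face-library records and their time-ordered storage described above can be sketched as follows; every field name and value here is an illustrative assumption, not taken from the patent.

```python
# Illustrative record for the preset face library.
face_record = {
    "image": "faces/2018-08-09/000123.jpg",  # captured face image (path)
    "captured_at": "2018-08-09T08:15:32",    # acquisition time (ISO 8601)
    "camera_id": "gate-03-entrance",         # label of the capture device
}

def insert_record(library, record):
    """Keep the library ordered by acquisition time, as the text describes."""
    library.append(record)
    library.sort(key=lambda r: r["captured_at"])
    return library
```

ISO 8601 strings are used so that lexicographic order equals chronological order, avoiding any datetime parsing in the sketch.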
Step 102: and identifying whether the face image in the preset face library is the face image of the registered person.
In this embodiment, a first registration library may be preset, dedicated to recording the information of registered persons. The information of a registered person may include: the registered person's name, contact information (including mobile phone number and email address), home address, and face image.
In this embodiment, the first registration library may be stored in a local database in advance. In other embodiments, to save the terminal's storage space, the first registration library may be pre-stored on a server, with the server and the terminal communicating over a wired or wireless connection.
In this embodiment, the identifying whether the face image in the preset face library is a face image of a registered person specifically includes:
1) Detecting the face region of the face image using a preset face detection algorithm.
In this embodiment, the preset face detection algorithm may be, for example, a feature-based method, a clustering-based method, an artificial-neural-network-based method, or a support-vector-machine-based method. These are existing face detection algorithms and are not described in detail here.
2) Calculating the similarity between the face region of the face image in the preset face library and the face region of each registered person.
In this embodiment, this similarity may be calculated by a template matching method.
3) Judging whether the similarity is greater than or equal to a preset similarity threshold.
The preset similarity threshold is a preset value, and may be, for example, 99%.
When the similarity is greater than or equal to the preset similarity threshold, determining the face image in the preset face library as the face image of the registered person; and when the similarity is smaller than the preset similarity threshold, determining that the face image in the preset face library is not the face image of the registered person.
In this embodiment, each face image in the preset face library needs to be matched one by one against the face image of each registered person in the first registration library. If the similarity between a face image in the preset face library and the face image of any registered person in the first registration library is greater than or equal to the similarity threshold, that face image is considered to be the face image of that registered person in the preset first registration library. If its similarity to the face image of every registered person in the first registration library is below the similarity threshold, the face image is not considered to be that of a registered person in the preset first registration library.
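The threshold test above can be sketched as follows. The patent names template matching as one possible similarity measure; cosine similarity over feature vectors is used here purely as a stand-in, and the function names and 0.99 default are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Stand-in similarity between two face feature vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_registered(face, registry, threshold=0.99):
    """True if the face matches any registered face at or above the threshold
    (one-by-one matching, as the embodiment describes)."""
    return any(cosine_similarity(face, reg) >= threshold for reg in registry)
```

The short-circuiting `any` mirrors the text: a single match with any registered person suffices, while a non-match requires falling below the threshold for every registered person.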
When the face image in the preset face library is determined to be the face image of the registered person, executing step 103; otherwise, when it is determined that the face image in the preset face library is not the face image of the registered person, step 104 is executed.
Step 103: and extracting first acquisition time corresponding to all the face images of each of the registered persons.
There may be a plurality of persons determined as registered persons in the preset face library, and each person determined as a registered person may have a plurality of face images, so it is necessary to extract first acquisition times of all face images of all persons determined as each registered person in the preset face library, and sort the extracted first acquisition times corresponding to all face images belonging to the same registered person.
For example, assume the preset face library contains face images F1 to F7, where F1 and F2 belong to a first person, F3, F6, and F7 belong to a second person, F4 belongs to a third person, and F5 belongs to a fourth person. If the first and fourth persons are determined to be registered persons while the second and third persons are determined to be unregistered persons, then the acquisition times T1, T2, and T5 corresponding to F1, F2, and F5 are called first acquisition times, and the acquisition times T3, T4, T6, and T7 corresponding to F3, F4, F6, and F7 are called second acquisition times.
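The grouping in this worked example can be reproduced directly; the tuple layout for a capture record is an assumption, while the F1..F7 / T1..T7 identifiers follow the text.

```python
# (image, person, acquisition time) for each capture in the example.
captures = [
    ("F1", "first", "T1"), ("F2", "first", "T2"),
    ("F3", "second", "T3"), ("F4", "third", "T4"),
    ("F5", "fourth", "T5"), ("F6", "second", "T6"), ("F7", "second", "T7"),
]
registered = {"first", "fourth"}  # per the example's determination

first_times, second_times = {}, {}
for _, person, t in captures:
    # Registered persons' captures become first acquisition times,
    # unregistered persons' captures become second acquisition times.
    bucket = first_times if person in registered else second_times
    bucket.setdefault(person, []).append(t)
```

Running this yields T1, T2, T5 as first acquisition times and T3, T4, T6, T7 as second acquisition times, matching the text.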
Step 104: and identifying whether the face image in the preset face library is the face image of the unregistered person.
In this embodiment, a second registry may be preset, and dedicated to record information of unregistered persons. The information of the unregistered person may include: a face image of the unregistered person, an acquisition time of the unregistered person, and a number of an image acquisition apparatus that acquires the face image of the unregistered person.
In this embodiment, the face image of the unregistered person in the second registration library may be obtained through the following steps:
1) acquiring all face images in a previous preset time period (for example, 30 days) in the preset face library for the first time;
2) recognizing the face images of the registered persons in all the face images;
3) and determining the face images except the face image of the registered person in all the face images as the face images of the unregistered person and storing the face images in the second registration library.
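The three first-pass steps above can be sketched as a single filter; the function name and the `matches` predicate parameter are illustrative assumptions standing in for the recognition procedure already described.

```python
def build_unregistered_library(face_library, registered_faces, matches):
    """First-pass construction of the second registration library: any face
    in the past-period library that matches no registered face is stored
    as the face of an unregistered person."""
    return [face for face in face_library
            if not any(matches(face, reg) for reg in registered_faces)]
```

In practice `matches` would be the similarity-threshold test from the recognition step; a trivial equality predicate is enough to exercise the logic.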
In this embodiment, the process of identifying whether the face image in the preset face library is a face image of an unregistered person is similar to the process of identifying whether the face image in the preset face library is a face image of a registered person, and a description thereof is omitted here.
When the face image in the preset face library is determined to be the face image of the unregistered person, executing step 105; otherwise, when it is determined that the face image in the preset face library is not the face image of the unregistered person, step 106 is executed.
Step 105: and extracting second acquisition time corresponding to all the face images of each unregistered person in the unregistered persons.
There may be a plurality of persons determined as unregistered persons in the preset face library, and each person determined as an unregistered person may have a plurality of face images, so that it is necessary to extract second acquisition times of all face images of all persons determined as each unregistered person in the preset face library, and sort the extracted second acquisition times corresponding to all face images belonging to the same unregistered person.
106: and determining the face image in the preset face library as a face image of the person which cannot be identified, and marking a preset identifier on the face image of the person which cannot be identified.
The preset identifier marks the face images of unidentifiable persons in the preset face library, that is, it marks the persons corresponding to those face images as unidentifiable; an unidentifiable person can be understood as a newly appearing unregistered person. The preset identifier may be one or more of the following: a graphical identifier, e.g., a question mark; a textual identifier, e.g., "unrecognizable"; a color, e.g., gray.
Step 107: and calculating the difference value of all the first acquisition time of each registered person and all the second acquisition time of each unregistered person to obtain all the first time difference values.
The time difference is calculated for the first acquisition time corresponding to all face images belonging to each registered person (referred to as all first acquisition time belonging to the same registered person for short) and the second acquisition time corresponding to all face images belonging to each unregistered person (referred to as all second acquisition time belonging to the same unregistered person for short) in the preset face library. The calculation of the first time difference value needs to be performed for all first acquisition times for all registered persons and all second acquisition times for all unregistered persons.
For example, assume the preset face library contains registered persons A1, A2, A3 and unregistered persons B1, B2. Registered person A1 has 3 face images with first acquisition times T11, T12, and T13; A2 has 1 face image with first acquisition time T14; A3 has 2 face images with first acquisition times T15 and T16. Unregistered person B1 has 3 face images with second acquisition times T21, T22, and T23; B2 has 2 face images with second acquisition times T24 and T25. The difference between A1's first acquisition time T11 and each of B1's second acquisition times T21, T22, T23, and each of B2's second acquisition times T24, T25, must be computed; likewise for A1's T12 and T13, then for A2's T14 against those same second acquisition times, and so on, until all first time differences between all first acquisition times of all registered persons and all second acquisition times of all unregistered persons are obtained.
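The exhaustive pairwise computation in this example can be written as one comprehension. For simplicity the acquisition times are given here as plain second counts; the concrete numbers are illustrative assumptions.

```python
# All pairwise first-vs-second time differences for the A1..A3 / B1..B2 example.
registered = {"A1": [10, 40, 70], "A2": [100], "A3": [130, 160]}
unregistered = {"B1": [11, 300, 600], "B2": [900, 1200]}

first_time_diffs = {
    (reg, unreg): [abs(t1 - t2) for t1 in reg_ts for t2 in unreg_ts]
    for reg, reg_ts in registered.items()
    for unreg, unreg_ts in unregistered.items()
}
# Six (registered, unregistered) pairs in total: 3 registered x 2 unregistered.
```

With these numbers, A1's capture at second 10 and B1's capture at second 11 differ by 1 second, so A1 and B1 would be flagged as companions under a 2-second threshold.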
This makes it convenient to determine, from the time differences, whether an unregistered person is a companion of a registered person.
In this embodiment, calculating the first time difference between one first acquisition time and one second acquisition time may specifically include:
1) judging whether the first acquisition time is greater than the second acquisition time;
2) when the first acquisition time is greater than or equal to the second acquisition time, subtracting the second acquisition time from the first to obtain the first time difference;
3) when the first acquisition time is less than the second acquisition time, subtracting the first acquisition time from the second to obtain the first time difference.
In other words, the first time difference is the absolute value of the difference between the two acquisition times.
It will be appreciated that a first time difference must be calculated for every pair of one first acquisition time and one second acquisition time. Once the differences between all first acquisition times of all registered persons and all second acquisition times of all unregistered persons have been calculated, all first time differences are obtained.
Step 108: and judging whether a first time difference value smaller than a preset first difference value threshold exists in all the first time difference values.
In this embodiment, the preset first difference threshold is a preset first value, and may be, for example, 2 seconds.
When it is determined that a first time difference less than or equal to the preset first difference threshold exists among all the first time differences, that is, there is a pair of first and second acquisition times whose difference is less than or equal to the threshold, step 109 is executed; when no such first time difference exists, the process may end directly.
Step 109: and determining the unregistered person corresponding to the second acquisition time related to the first time difference value smaller than or equal to the preset first difference threshold value in all the first time difference values and the registered person corresponding to the related first acquisition time as the same person.
In this embodiment, since there is a first time difference value smaller than or equal to the preset first difference threshold among all the first time difference values, i.e. a first time difference between the first acquisition time and the second acquisition time is less than or equal to said first difference threshold, the time interval between the corresponding first acquisition time and the corresponding second acquisition time for a first time difference value related to less than or equal to the first difference threshold may be considered shorter, it can be considered that the unregistered person corresponding to the face image acquired at the corresponding second acquisition time passes through the image acquisition apparatus at the same time as the registered person corresponding to the face image acquired at the corresponding first acquisition time, and the unregistered person corresponding to the face image acquired at the corresponding second acquisition time and the registered person corresponding to the face image acquired at the corresponding first acquisition time move forwards at the same time.
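Steps 108 and 109 can be sketched together as follows (an illustrative Python sketch; the dictionary layout mapping person identifiers to lists of acquisition times is an assumption):

```python
def find_companions(registered, unregistered, threshold=2):
    """Determine companion pairs: an unregistered person and a registered
    person are companions if any of their acquisition times differ by no
    more than `threshold` seconds (the preset first difference threshold)."""
    companions = set()
    for reg_id, first_times in registered.items():
        for unreg_id, second_times in unregistered.items():
            if any(abs(t1 - t2) <= threshold
                   for t1 in first_times for t2 in second_times):
                companions.add((reg_id, unreg_id))
    return companions

registered = {"resident-1": [100, 5000]}
unregistered = {"visitor-a": [101], "visitor-b": [9000]}
print(find_companions(registered, unregistered))  # → {('resident-1', 'visitor-a')}
```

The same routine applies unchanged to the second time difference values for unrecognizable persons by substituting their third acquisition times.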
Further, after determining as companions the unregistered person corresponding to the second acquisition time associated with a first time difference value less than or equal to the preset first difference threshold and the registered person corresponding to the associated first acquisition time, the method may further include: storing the face image of the unregistered person determined as a companion in association with the face image of the registered person; and setting a first authority for the unregistered person. The first authority may include: allowing direct passage through the access barrier.
In this embodiment, when the unregistered person is determined to be a companion of the registered person, the two are associated and the first authority is set for the unregistered person, so that the unregistered person can be conveniently identified again later and the passing time of the unregistered person can be shortened.
Further, after the face image that cannot be recognized is marked with the preset identifier, the method may further include: extracting third acquisition times corresponding to all face images of each unrecognizable person among the unrecognizable persons; calculating the differences between all the first acquisition times of all the registered persons and all the third acquisition times of all the unrecognizable persons to obtain all second time difference values; judging whether a second time difference value less than or equal to a preset second difference threshold exists among all the second time difference values; and when it is determined that such a second time difference value exists, determining the unrecognizable person corresponding to the third acquisition time associated with that second time difference value, and the registered person corresponding to the associated first acquisition time, as companions.
In this embodiment, the process of calculating the differences between all the first acquisition times of all the registered persons and all the third acquisition times of all the unrecognizable persons to obtain all the second time difference values is the same as the process of calculating all the first time difference values described above, and is not repeated here. If a second time difference value between a first acquisition time and a third acquisition time is less than or equal to the second difference threshold, the unrecognizable person corresponding to that third acquisition time can be determined to be a companion of the registered person corresponding to that first acquisition time. In this way, whether an unrecognizable person is safe can be further judged automatically, making the analysis more intelligent and more efficient.
Further, after determining the unrecognizable person and the registered person as companions, the method may further include: storing the face image of the unrecognizable person determined as a companion in association with the face image of the registered person in a preset unregistered library; and setting a second authority for the unrecognizable person. The second authority may include: when a preset answer of the unrecognizable person to a preset question is received, controlling the access barrier to open so as to allow the unrecognizable person to pass.
In this embodiment, by obtaining the acquisition time of the face image of the unrecognizable person and judging whether it is the same as or close to the acquisition time of the face image of the registered person, it can be determined whether the unrecognizable person and the registered person are companions. When they are determined to be companions, the face image of the unrecognizable person is stored in the unregistered face library, which increases the number of face images in the unregistered face library and facilitates subsequent companion analysis.
Further, after it is determined that a second time difference value is less than or equal to the preset second difference threshold, the method may further include: sending a validity confirmation notification to the registered person corresponding to the first acquisition time; and when a validity confirmation result of the registered person is received, determining the unrecognizable person and the registered person as companions.
In this embodiment, through the validity confirmation by the registered person corresponding to the first acquisition time, it can be determined more accurately whether the unrecognizable person is a companion of the registered person, preventing an unrecognizable person from being stored in the unregistered face library because of coincidental tailing.
Further, when an invalidity confirmation result of the registered person is received, the method may further include: associating the invalidity confirmation result with the unrecognizable person; counting the number of invalidity confirmation results associated with the unrecognizable person; judging whether the counted number is greater than a preset number threshold; and when it is determined that the counted number is greater than the preset number threshold, adding the unrecognizable person to a preset blacklist.
In this embodiment, a blacklist is preset for the face images of unrecognizable persons posing a potential threat. In real life, an unrecognizable person may linger around a residential community and repeatedly tail or track registered persons, and may be mistaken for a companion of a registered person because the acquisition times are the same. When invalidity confirmation results of registered persons are received and their number reaches a certain level, the unrecognizable person can be automatically judged to be a potential threat; when that person is identified again later, the registered persons can be notified, or the person can be scared away by means of an alarm.
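The counting and blacklisting logic described above might be sketched as follows (the class and method names are illustrative assumptions, not part of the claimed method):

```python
from collections import Counter

class InvalidityTracker:
    """Count invalidity confirmation results per unrecognizable person and
    blacklist those whose count exceeds a preset number threshold."""

    def __init__(self, number_threshold=3):
        self.number_threshold = number_threshold
        self.invalid_counts = Counter()
        self.blacklist = set()

    def record_invalidity(self, person_id):
        # Associate the invalidity confirmation result with the person
        # and re-check the counted number against the threshold.
        self.invalid_counts[person_id] += 1
        if self.invalid_counts[person_id] > self.number_threshold:
            self.blacklist.add(person_id)

tracker = InvalidityTracker(number_threshold=2)
for _ in range(3):
    tracker.record_invalidity("stranger-7")
print(tracker.blacklist)  # → {'stranger-7'}
```

A person confirmed invalid only once or twice (at this threshold) stays off the blacklist, which matches the intent of tolerating occasional coincidental tailing.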
In summary, the method obtains face data within a preset time period from a preset face library, the face data including face images and corresponding acquisition times; when a face image in the preset face library is determined to be the face image of a registered person, extracts the first acquisition times corresponding to all the face images of each registered person; when a face image is determined not to be the face image of a registered person but to be the face image of an unregistered person, extracts the second acquisition times corresponding to all the face images of each unregistered person; calculates the differences between all the first acquisition times of each registered person and all the second acquisition times of each unregistered person to obtain all the first time difference values; and determines the unregistered person corresponding to the second acquisition time associated with a first time difference value less than or equal to the preset first difference threshold and the registered person corresponding to the associated first acquisition time as companions, which facilitates management. Moreover, only the face data within the preset time period needs to be obtained and analyzed, which greatly reduces the amount of face analysis, and the face images of unregistered persons that have already been analyzed do not change as time passes. In addition, as time goes on, the face images of unregistered persons only increase and are not modified, and the face images of unrecognizable persons may decrease, which greatly reduces the management burden on the administrator.
The pedestrian analysis method of the present invention has been described in detail above with reference to fig. 1. Functional modules of a software system implementing the pedestrian analysis method and a hardware system architecture implementing the method are described below with reference to fig. 2 and fig. 3.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
Example two
Fig. 2 is a functional block diagram of a pedestrian analysis apparatus according to a second embodiment of the present invention.
The pedestrian analysis apparatus 20 runs in a terminal. The pedestrian analysis apparatus 20 may include a plurality of functional modules composed of program code segments. The program code of the various program segments in the pedestrian analysis apparatus 20 may be stored in the memory of the terminal and executed by at least one processor of the terminal to perform companion analysis for a smart campus.
In the present embodiment, the pedestrian analysis apparatus 20 may be divided into a plurality of functional modules according to the functions performed by the pedestrian analysis apparatus. The functional module may include: the device comprises an acquisition module 201, a recognition module 202, an extraction module 203, an identification module 204, a calculation module 205, a judgment module 206, a determination module 207, an association module 208, a sending module 209 and a statistic module 210. The modules communicate with each other through at least one communication bus. The module referred to herein is a series of computer program segments capable of being executed by a processor and of performing a fixed function and stored in memory. In the present embodiment, the functions related to the respective modules correspond to the steps of the pedestrian analysis method in the above-described embodiment. The explanation of the functions of the modules is not repeated.
The obtaining module 201 is configured to obtain face data in a preset time period in a preset face library.
The recognition module 202 is configured to recognize whether the face image in the preset face library is a face image of a registered person.
In this embodiment, the recognition module 202 recognizing whether the face image in the preset face library is the face image of a registered person specifically includes:
1) and detecting the face area of the face image by using a preset face detection algorithm.
In this embodiment, the preset face detection algorithm may be, for example, a feature-based method, a clustering-based method, an artificial-neural-network-based method, or a support-vector-machine-based method. Such face detection algorithms are existing techniques and are not described in detail here.
2) And calculating the similarity between the face region of the face image in the preset face library and the face region of the registered person.
In this embodiment, the similarity between the face region of the face image in the preset face library and the face region of the registered person may be calculated by a template matching method.
3) judging whether the similarity is greater than or equal to a preset similarity threshold.
The preset similarity threshold is a preset value, and may be, for example, 99%.
When the similarity is greater than or equal to the preset similarity threshold, the face image in the preset face library is determined to be the face image of a registered person; when the similarity is less than the preset similarity threshold, the face image in the preset face library is determined not to be the face image of a registered person.
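The similarity comparison in steps 2) and 3) might be sketched as follows. The patent mentions template matching; the cosine similarity over feature vectors used here is a stand-in assumption for whichever similarity measure is chosen:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face-region feature vectors; 1.0 means the
    vectors point in exactly the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_registered_face(face_vec, registered_vecs, threshold=0.99):
    """Step 3): the face counts as registered if its similarity to any
    registered face region reaches the preset similarity threshold."""
    return any(cosine_similarity(face_vec, reg) >= threshold
               for reg in registered_vecs)

probe = [0.2, 0.8, 0.1]
gallery = [[0.2, 0.8, 0.1], [0.9, 0.1, 0.0]]
print(is_registered_face(probe, gallery))  # → True (exact match in gallery)
```

A very high threshold such as 0.99 keeps false matches rare at the cost of occasionally routing a genuine registered person into the unrecognizable branch.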
The extraction module 203 is configured to extract the first acquisition times corresponding to all face images of each registered person among the registered persons when the recognition module 202 determines that the face images in the preset face library are the face images of registered persons.
There may be a plurality of persons determined as registered persons in the preset face library, and each person determined as a registered person may have a plurality of face images, so it is necessary to extract first acquisition times of all face images of all persons determined as each registered person in the preset face library, and sort the extracted first acquisition times corresponding to all face images belonging to the same registered person.
It should be noted that the first acquisition time referred to herein does not refer to the time when the face images in the preset face library are acquired for the first time, but for the convenience of distinguishing from the second acquisition time hereinafter, the acquisition time of the face images determined as registered persons in the preset face library is referred to as the first acquisition time, and the acquisition time of the face images determined as unregistered persons in the preset face library is referred to as the second acquisition time.
The recognition module 202 is further configured to recognize whether the face image in the preset face library is the face image of an unregistered person when it is determined that the face image in the preset face library is not the face image of a registered person.
In this embodiment, a second registration library may be preset, dedicated to recording information of unregistered persons. The information of an unregistered person may include: the face image of the unregistered person, the acquisition time of the face image, and the number of the image acquisition apparatus that acquired the face image.
In this embodiment, the face image of the unregistered person in the second registration library may be obtained through the following steps:
1) when running for the first time, acquiring all face images within a previous preset time period (for example, 30 days) from the preset face library;
2) recognizing the face images of registered persons among all the face images;
3) determining the face images other than those of registered persons as the face images of unregistered persons and storing them in the second registration library.
In this embodiment, the recognition module 202 recognizes whether the face image in the preset face library is a face image of an unregistered person, which is similar to a process of the recognition module 202 recognizing whether the face image in the preset face library is a face image of a registered person, and this description is not repeated here.
The extraction module 203 is further configured to extract the second acquisition times corresponding to all face images of each unregistered person among the unregistered persons when the recognition module 202 determines that the face images in the preset face library are the face images of unregistered persons.
The identification module 204 is configured to determine, when the recognition module 202 determines that the face image in the preset face library is not the face image of an unregistered person, that the face image is the face image of an unrecognizable person, and to mark the face image of the unrecognizable person with a preset identifier.
The calculation module 205 is configured to calculate the differences between all the first acquisition times of each registered person and all the second acquisition times of each unregistered person to obtain all the first time difference values.
The time differences are calculated between the first acquisition times corresponding to all face images belonging to each registered person (for short: all first acquisition times of the same registered person) and the second acquisition times corresponding to all face images belonging to each unregistered person (for short: all second acquisition times of the same unregistered person) in the preset face library. The first time difference values need to be calculated for all first acquisition times of all registered persons and all second acquisition times of all unregistered persons.
In this embodiment, the calculating module 205 may specifically calculate the first time difference between one first acquisition time and one second acquisition time by:
1) judging whether the first acquisition time is later than the second acquisition time;
2) when the first acquisition time is determined to be later than or equal to the second acquisition time, subtracting the second acquisition time from the first acquisition time to obtain the first time difference value;
3) when the first acquisition time is determined to be earlier than the second acquisition time, subtracting the first acquisition time from the second acquisition time to obtain the first time difference value.
It will be appreciated that a first time difference value needs to be calculated for every pair of a first acquisition time and a second acquisition time. When the differences between all the first acquisition times of all the registered persons and all the second acquisition times of all the unregistered persons have been calculated, all the first time difference values are obtained.
The judging module 206 is configured to judge whether a first time difference value less than or equal to the preset first difference threshold exists among all the first time difference values.
In this embodiment, the preset first difference threshold is a preset value, for example, 2 seconds.
The determination module 207 is configured to, when the judging module 206 determines that a first time difference value less than or equal to the preset first difference threshold exists among all the first time difference values, that is, when the difference between some first acquisition time and some second acquisition time is less than or equal to the preset first difference threshold, determine the unregistered person corresponding to the associated second acquisition time and the registered person corresponding to the associated first acquisition time as companions.
In this embodiment, when a first time difference value is less than or equal to the preset first difference threshold, the interval between the corresponding first acquisition time and the corresponding second acquisition time is short. It can therefore be considered that the unregistered person whose face image was acquired at that second acquisition time passed the image acquisition apparatus at essentially the same moment as the registered person whose face image was acquired at that first acquisition time, that is, that the two persons were moving forward together.
Further, the pedestrian analysis apparatus 20 may further include an association module 208, configured to, after the determination module 207 determines as companions the unregistered person corresponding to the second acquisition time associated with a first time difference value less than or equal to the preset first difference threshold and the registered person corresponding to the associated first acquisition time, store the face image of the unregistered person in association with the face image of the registered person, and set a first authority for the unregistered person. The first authority may include: allowing direct passage through the access barrier.
In this embodiment, when the unregistered person is determined to be a companion of the registered person, the two are associated and the first authority is set for the unregistered person, so that the unregistered person can be conveniently identified again later and the passing time of the unregistered person can be shortened.
Further, the extraction module 203 may be further configured to, after the identification module 204 marks the unrecognizable face image with the preset identifier, extract the third acquisition times corresponding to all face images of each unrecognizable person; the calculation module 205 may be further configured to calculate the differences between all the first acquisition times of all the registered persons and all the third acquisition times of all the unrecognizable persons to obtain all second time difference values; the judging module 206 may be further configured to judge whether a second time difference value less than or equal to a preset second difference threshold exists among all the second time difference values; and the determination module 207 may be further configured to, when the judging module 206 determines that such a second time difference value exists, determine the unrecognizable person corresponding to the third acquisition time associated with that second time difference value and the registered person corresponding to the associated first acquisition time as companions.
In this embodiment, the process of calculating the differences between all the first acquisition times of all the registered persons and all the third acquisition times of all the unrecognizable persons to obtain all the second time difference values is the same as the process of calculating all the first time difference values described above, and is not repeated here. If a second time difference value between a first acquisition time and a third acquisition time is less than or equal to the second difference threshold, the unrecognizable person corresponding to that third acquisition time can be determined to be a companion of the registered person corresponding to that first acquisition time. In this way, whether an unrecognizable person is safe can be further judged automatically, making the analysis more intelligent and more efficient.
Further, the association module 208 may be further configured to, after the determination module 207 determines the unrecognizable person and the registered person as companions, store the face image of the unrecognizable person determined as a companion in association with the face image of the registered person corresponding to the first acquisition time in a preset unregistered library, and set a second authority for the unrecognizable person. The second authority may include: when a preset answer of the unrecognizable person to a preset question is received, controlling the access barrier to open so as to allow the unrecognizable person to pass.
In this embodiment, by obtaining the acquisition time of the face image of the unrecognizable person and judging whether it is the same as or close to the acquisition time of the face image of the registered person, it can be determined whether the unrecognizable person and the registered person are companions. When they are determined to be companions, the face image of the unrecognizable person is stored in the unregistered face library, which increases the number of face images in the unregistered face library and facilitates subsequent companion analysis.
Further, the pedestrian analysis apparatus 20 may further include a sending module 209, configured to send a validity confirmation notification to the registered person corresponding to the first acquisition time after the judging module 206 determines that a second time difference value is less than or equal to the preset second difference threshold; the determination module 207 may be further configured to determine the unrecognizable person and the registered person as companions when a validity confirmation result of the registered person is received.
In this embodiment, through the validity confirmation by the registered person corresponding to the first acquisition time, it can be determined more accurately whether the unrecognizable person is a companion of the registered person, preventing an unrecognizable person from being stored in the unregistered face library because of coincidental tailing.
Further, the association module 208 may be further configured to, when an invalidity confirmation result of the registered person is received, associate the invalidity confirmation result with the unrecognizable person; the statistic module 210 may be configured to count the number of invalidity confirmation results associated with the unrecognizable person; the judging module 206 may be further configured to judge whether the counted number is greater than a preset number threshold; and the determination module 207 may be further configured to add the unrecognizable person to a preset blacklist when it is determined that the counted number is greater than the preset number threshold.
In this embodiment, a blacklist is preset for the face images of unrecognizable persons posing a potential threat. In real life, an unrecognizable person may linger around a residential community and repeatedly tail or track registered persons, and may be mistaken for a companion of a registered person because the acquisition times are the same. When invalidity confirmation results of registered persons are received and their number reaches a certain level, the unrecognizable person can be automatically judged to be a potential threat; when that person is identified again later, the registered persons can be notified, or the person can be scared away by means of an alarm.
In summary, the apparatus obtains face data within a preset time period from a preset face library, the face data including face images and corresponding acquisition times; when a face image in the preset face library is determined to be the face image of a registered person, extracts the first acquisition times corresponding to all the face images of each registered person; when a face image is determined not to be the face image of a registered person but to be the face image of an unregistered person, extracts the second acquisition times corresponding to all the face images of each unregistered person; calculates the differences between all the first acquisition times of each registered person and all the second acquisition times of each unregistered person to obtain all the first time difference values; and determines the unregistered person corresponding to the second acquisition time associated with a first time difference value less than or equal to the preset first difference threshold and the registered person corresponding to the associated first acquisition time as companions, which facilitates management. Moreover, only the face data within the preset time period needs to be obtained and analyzed, which greatly reduces the amount of face analysis, and the face images of unregistered persons that have already been analyzed do not change as time passes. In addition, as time goes on, the face images of unregistered persons only increase and are not modified, and the face images of unrecognizable persons may decrease, which greatly reduces the management burden on the administrator.
EXAMPLE III
Fig. 3 is a schematic diagram of the terminal 1 according to the third embodiment of the present invention. The terminal 1 comprises a memory 20, a processor 30, a computer program 40 stored in the memory 20 and executable on the processor 30, an image acquisition device 50 and at least one communication bus 60. The processor 30, when executing the computer program 40, implements the pedestrian analysis method, such as steps 101-109 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 40, implements the functions of the modules/units in the above device embodiments, such as the modules 201 to 210 in fig. 2.
Illustratively, the computer program 40 may be partitioned into one or more modules/units that are stored in the memory 20 and executed by the processor 30 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 40 in the terminal 1.
The terminal 1 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or another computing device. Those skilled in the art will appreciate that fig. 3 is only an example of the terminal 1 and does not constitute a limitation on the terminal 1, which may include more or fewer components than those shown, combine certain components, or use different components; for example, the terminal 1 may further include input and output devices, network access devices, buses, and the like.
The processor 30 may be a central processing unit, or another general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor. The processor 30 is the control center of the terminal 1, and the various parts of the whole terminal 1 are connected through various interfaces and lines.
The memory 20 may be used to store the computer program 40 and/or the modules/units; the processor 30 implements the various functions of the terminal 1 by running or executing the computer program and/or the modules/units stored in the memory 20 and by invoking data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function), and the data storage area may store data created according to the use of the terminal 1 (such as audio data or a phonebook). Further, the memory 20 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart memory card, a secure digital card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If implemented in the form of software functional units and sold or used as independent products, the integrated modules/units of the terminal 1 may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.
In the embodiments provided by the present invention, it should be understood that the disclosed terminal and method can be implemented in other manners. For example, the terminal embodiment described above is merely illustrative: the division into units is only a logical functional division, and other division manners are possible in actual implementation.
In addition, the functional units in the embodiments of the present invention may be integrated into the same processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional modules.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope.
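For completeness, the second matching stage of the method (claim 1) tags faces that match neither the registered nor the unregistered set as "unidentifiable" and compares them against registered persons using a second difference threshold. The following sketch is illustrative only; the tag string, record layout, and threshold value are assumptions, not the patented implementation:

```python
SECOND_DIFF_THRESHOLD = 10.0  # preset second difference threshold (assumed)

def tag_unidentifiable(face_records, registered_ids, unregistered_ids):
    """Return records of persons who are neither registered nor unregistered,
    with a preset mark prefixed to their identifier."""
    tagged = {}
    for person_id, times in face_records.items():
        if person_id not in registered_ids and person_id not in unregistered_ids:
            tagged[f"UNID:{person_id}"] = times  # the "preset mark"
    return tagged

def match_against_registered(registered, unidentifiable, threshold):
    """Second stage: compute second time differences between first and third
    acquisition times; pairs within the threshold are companions."""
    pairs = []
    for reg_id, reg_times in registered.items():
        for unid_id, unid_times in unidentifiable.items():
            if any(abs(t1 - t3) <= threshold
                   for t1 in reg_times for t3 in unid_times):
                pairs.append((reg_id, unid_id))
    return pairs

faces = {"reg_A": [100.0], "unreg_X": [200.0], "ghost": [104.0]}
unid = tag_unidentifiable(faces, {"reg_A"}, {"unreg_X"})
print(match_against_registered({"reg_A": [100.0]}, unid, SECOND_DIFF_THRESHOLD))
# "ghost" was captured within 10 s of reg_A, so the two are paired
```

The same pairing routine serves both stages; only the comparison set (unregistered vs. unidentifiable persons) and the threshold differ.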

Claims (8)

1. A pedestrian analysis method is applied to a terminal, and is characterized by comprising the following steps:
obtaining face data within a preset time period from a preset face library, the face data comprising: face images and corresponding acquisition times;
when face images in the preset face library are determined to be face images of registered persons, extracting first acquisition times corresponding to all face images of each of the registered persons;
when face images in the preset face library are determined not to be face images of the registered persons but face images of unregistered persons, extracting second acquisition times corresponding to all face images of each of the unregistered persons;
calculating differences between all first acquisition times of each registered person and all second acquisition times of each unregistered person to obtain all first time differences;
determining as companions an unregistered person corresponding to a second acquisition time whose associated first time difference is smaller than or equal to a preset first difference threshold and the registered person corresponding to the associated first acquisition time;
determining face images in the preset face library that belong to neither a registered person nor an unregistered person as face images of unidentifiable persons, and applying a preset mark to the face images of the unidentifiable persons;
extracting third acquisition times corresponding to all face images of each of the unidentifiable persons;
calculating differences between all first acquisition times of all registered persons and all third acquisition times of all unidentifiable persons to obtain all second time differences;
and determining as companions an unidentifiable person corresponding to a third acquisition time whose associated second time difference is smaller than or equal to a preset second difference threshold and the registered person corresponding to the associated first acquisition time.
2. The method of claim 1, wherein after determining as companions the unregistered person corresponding to the second acquisition time whose associated first time difference is smaller than or equal to the preset first difference threshold and the registered person corresponding to the associated first acquisition time, the method further comprises:
storing, in association, the face image of the unregistered person determined to be a companion and the face image of the registered person;
and setting a first permission for the unregistered person.
3. The method of claim 1, wherein after determining the unidentifiable person and the registered person as companions, the method further comprises:
storing, in a preset unregistered library, the face image of the unidentifiable person determined to be a companion in association with the face image of the companion registered person;
and setting a second permission for the unidentifiable person.
4. The method of claim 3, wherein the second permission comprises: controlling an access-control barrier gate to open upon receiving a preset answer to a preset question from the unidentifiable person.
5. The method according to any one of claims 1 to 4, wherein the face images of unregistered persons are obtained by:
acquiring, for the first time, all face images within a previous preset time period from the preset face library;
recognizing the face images of registered persons among all the face images;
and determining the face images other than those of registered persons among all the face images as face images of unregistered persons and storing them in a second registration library.
6. A pedestrian analysis apparatus operating in a terminal, the apparatus comprising:
an acquisition module, configured to acquire face data within a preset time period from a preset face library, the face data comprising: face images and corresponding acquisition times;
an extraction module, configured to extract first acquisition times corresponding to all face images of each registered person when a recognition module determines that face images in the preset face library are face images of registered persons;
the extraction module being further configured to extract second acquisition times corresponding to all face images of each unregistered person when the recognition module determines that face images in the preset face library are not face images of the registered persons but face images of unregistered persons;
a calculation module, configured to calculate differences between all first acquisition times of each registered person and all second acquisition times of each unregistered person to obtain all first time differences; and a determining module, configured to determine as companions an unregistered person corresponding to a second acquisition time whose associated first time difference is smaller than or equal to a preset first difference threshold and the registered person corresponding to the associated first acquisition time;
an identification module, configured to determine face images in the preset face library that belong to neither a registered person nor an unregistered person as face images of unidentifiable persons, and to apply a preset mark to the face images of the unidentifiable persons;
the extraction module being further configured to extract third acquisition times corresponding to all face images of each of the unidentifiable persons;
the calculation module being further configured to calculate differences between all first acquisition times of all registered persons and all third acquisition times of all unidentifiable persons to obtain all second time differences;
and the determining module being further configured to determine as companions an unidentifiable person corresponding to a third acquisition time whose associated second time difference is smaller than or equal to a preset second difference threshold and the registered person corresponding to the associated first acquisition time.
7. A terminal, characterized in that the terminal comprises a processor for implementing the pedestrian analysis method according to any one of claims 1 to 5 when executing a computer program stored in a memory.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the pedestrian analysis method according to any one of claims 1 to 5.
CN201810904924.2A 2018-08-09 2018-08-09 Pedestrian analysis method, device, terminal and storage medium Active CN110874878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810904924.2A CN110874878B (en) 2018-08-09 2018-08-09 Pedestrian analysis method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN110874878A CN110874878A (en) 2020-03-10
CN110874878B true CN110874878B (en) 2021-09-14

Family

ID=69714126

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111915788A (en) * 2020-07-27 2020-11-10 成都捷顺宝信息科技有限公司 Anti-trailing method and system based on face recognition technology
CN111932759B (en) * 2020-08-15 2021-11-30 湖南华宽通科技股份有限公司 Smart park Internet of things data processing platform and method
CN112017347B (en) * 2020-08-27 2023-04-25 日立楼宇技术(广州)有限公司 Visitor registration method and device, electronic equipment and storage medium
CN112258719B (en) * 2020-10-14 2022-07-08 杭州海康威视数字技术股份有限公司 Access control system, identity authentication method and access control equipment
CN113313865A (en) * 2021-02-02 2021-08-27 江苏商贸职业学院 Linkage security protection reminding mechanism for face recognition
CN113269916B (en) * 2021-05-17 2022-04-19 武汉爱迪科技股份有限公司 Guest prejudging analysis method and system based on face recognition

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
FR2976106B1 (en) * 2011-06-01 2015-07-17 Morpho SYSTEM AND METHOD FOR CONTROLLING THE ACCESS OF AN INDIVIDUAL TO A CONTROLLED ACCESS AREA
CN103679215B (en) * 2013-12-30 2017-03-01 中国科学院自动化研究所 The video frequency monitoring method of the groupment behavior analysiss that view-based access control model big data drives
CN103942869B (en) * 2014-05-09 2017-01-04 山东大学 A kind of method that testing staff passes in and out quantity in construction elevator
CN104599367A (en) * 2014-12-31 2015-05-06 苏州福丰科技有限公司 Multi-user parallel access control recognition method based on three-dimensional face image recognition
CN105208528B (en) * 2015-09-24 2018-05-22 山东合天智汇信息技术有限公司 A kind of system and method for identifying with administrative staff
CN105741392B (en) * 2016-01-29 2018-12-28 成都比善科技开发有限公司 Cell access control system visitor records system and method
CN105761339B (en) * 2016-01-29 2017-09-29 成都比善科技开发有限公司 A kind of cell entrance guard management system and method
CN205722045U (en) * 2016-04-22 2016-11-23 广州迈得豪电子科技有限公司 A kind of detection device of low-power consumption flow of the people
CN105869388B (en) * 2016-05-31 2018-09-04 苏州朗捷通智能科技有限公司 The analysis method and system of a kind of acquisition of bus passenger flow data and origin and destination
CN106408832B (en) * 2016-08-29 2018-10-12 重庆交通大学 A kind of monitoring method and system of noiseless visitor
CN107679613A (en) * 2017-09-30 2018-02-09 同观科技(深圳)有限公司 A kind of statistical method of personal information, device, terminal device and storage medium


Similar Documents

Publication Publication Date Title
CN110874878B (en) Pedestrian analysis method, device, terminal and storage medium
CN108446681B (en) Pedestrian analysis method, device, terminal and storage medium
US20220092881A1 (en) Method and apparatus for behavior analysis, electronic apparatus, storage medium, and computer program
CN109635146B (en) Target query method and system based on image characteristics
US20190087464A1 (en) Regional population management system and method
CN111291682A (en) Method and device for determining target object, storage medium and electronic device
CN109712291B (en) Opening method and device of electronic gate and server
CN105868693A (en) Identity authentication method and system
CN111476685B (en) Behavior analysis method, device and equipment
CN112507314B (en) Client identity verification method, device, electronic equipment and storage medium
CN113938827A (en) Method, device, equipment and storage medium for verifying communication number user
CN112183161B (en) Face database processing method, device and equipment
CN111079469B (en) Face image processing method, device, equipment and readable storage medium
CN109885994B (en) Offline identity authentication system, device and computer readable storage medium
CN113158958B (en) Traffic method and related device
CN115391596A (en) Video archive generation method and device and storage medium
CN113449563A (en) Personnel tracking and marking method and device, electronic equipment and storage medium
CN113689613A (en) Access control system, access control method, and storage medium
CN113052100A (en) Traffic identification method and related device
CN112699810A (en) Method and device for improving figure identification precision of indoor monitoring system
CN112102551A (en) Device control method, device, electronic device and storage medium
CN112149475A (en) Luggage case verification method, device and system and storage medium
CN111368115A (en) Data clustering method and device, clustering server and storage medium
CN113128262A (en) Target identification method and device, storage medium and electronic device
CN111159159B (en) Public traffic passing method, device, equipment and system based on history passing record

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant