CN112818922A - Store clerk identification method based on image - Google Patents

Store clerk identification method based on image

Info

Publication number
CN112818922A
CN112818922A
Authority
CN
China
Prior art keywords
action
pool
clustering
pedestrian
time
Prior art date
Legal status
Granted
Application number
CN202110210844.9A
Other languages
Chinese (zh)
Other versions
CN112818922B (en)
Inventor
程安民
赵宇迪
施侃
Current Assignee
Shanghai Shuchuan Data Technology Co ltd
Original Assignee
Shanghai Shuchuan Data Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Shuchuan Data Technology Co ltd filed Critical Shanghai Shuchuan Data Technology Co ltd
Priority to CN202110210844.9A priority Critical patent/CN112818922B/en
Publication of CN112818922A publication Critical patent/CN112818922A/en
Application granted granted Critical
Publication of CN112818922B publication Critical patent/CN112818922B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention discloses an image-based store clerk identification method, which comprises establishing/maintaining a pedestrian action cluster pool queue and determining store clerks. A camera installed in an offline store captures video inside the store. A deep neural network detects pedestrians in the captured video and extracts action features from each detected pedestrian to obtain action feature vectors. The Euclidean distance between different action feature vectors is used as the clustering matching condition to match the captured pedestrian actions effectively. The total capture time of the video is used as a time axis, which is divided at equal intervals into time slices, and a time distribution histogram is built with the time slice as the unit. Store clerks are identified by combining, after clustering matching, the frequency with which each action class appears at a given position in the store over a period of time with the distribution of each action class in the time histogram.

Description

Store clerk identification method based on image
Technical Field
The invention relates to the technical field of image identification, in particular to a store clerk identification method based on images.
Background
In recent years, apart from identifying store clerks by manually reviewing surveillance video, the techniques commonly used for store clerk identification include:
1. Image-based face recognition: a face is captured in the image, facial features are extracted from the captured face, and the extracted features are matched against the facial features of store clerks registered in the system to identify the clerk;
2. Image-based pedestrian trajectory tracking: pedestrians are captured in the images, a pedestrian tracking algorithm tracks them to obtain each pedestrian's trajectory, and finally the trajectories are classified to identify store clerks;
3. Store clerks wear positioning tags: each clerk carries a positioning tag (such as an RFID tag), tag acquisition devices are deployed in the store, and clerks are identified by collecting the positions at which the tags appear.
Manually reviewing surveillance video is one of the most common methods; its advantage is that clerks in the video can be identified accurately and effectively, but its drawback is the enormous labor cost of manual identification. Having clerks wear positioning tags requires deploying a large number of tag acquisition devices in the store, which increases cost. The image-based pedestrian trajectory tracking method captures pedestrians, tracks them with a tracking algorithm to obtain their trajectories, and identifies clerks by judging those trajectories; it therefore depends heavily on the correctness of the tracked trajectory. In a store scene, however, tracking must span multiple cameras over long periods, so tracking accuracy is relatively low, and different pedestrians may, with some probability, be merged into the same trajectory, causing tracking errors; once a tracking error occurs, the clerk identification result cannot be guaranteed.
Disclosure of Invention
The invention aims to provide an image-based store clerk identification method, which has the advantage of high identification precision and solves the problems in the background art.
To achieve this purpose, the invention provides the following technical solution: an image-based store clerk identification method comprises establishing/maintaining a pedestrian action cluster pool queue and store clerk determination, wherein establishing/maintaining the pedestrian action cluster pool queue comprises the following steps:
Step one: acquire video of the offline store through a camera installed in the store, detect pedestrians in the acquired video using a deep learning neural network to obtain individual pedestrian pictures, and extract action features from each individual pedestrian to obtain action feature vectors;
Step two: after the pedestrian action features have been acquired, start the pedestrian action clustering matching operation. The specific clustering flow is as follows:
1. and when one pedestrian action feature is extracted, creating a corresponding action object for the action feature.
2. After an action object is created, its feature vector is matched against the average feature vector of each action cluster pool in the action cluster pool queue. The matching method uses the Euclidean distance: the action cluster pool closest to the current action feature vector is found, and the action object is added to that pool. The action cluster pool queue is a data structure that aggregates and matches action objects whose captured pedestrian action features lie close to one another in Euclidean distance. When the action cluster pool queue is empty, an action cluster pool must first be created. The data structure of an action cluster pool comprises the creation time of the pool, the number of pedestrian action objects stored in the pool, the average action feature vector used for clustering, and the pool's time slice list.
3. Find the time slice corresponding to the creation time of each pedestrian object, add the object to it, and update the average feature vector of the pedestrian cluster pool. If the cluster pool queue is empty, or no cluster pool close to the current action feature vector is found, create a new cluster pool object (an illustrative data-structure sketch follows this list).
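Since the action object and cluster pool are described above only in prose, the following is a minimal sketch, in Python, of how those data structures could be laid out; all names (ActionObject, ActionClusterPool, the slice-index keying) are illustrative assumptions, not part of the original disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np


@dataclass
class ActionObject:
    """One detected pedestrian action (flow 1): feature vector, timestamp, attributes."""
    feature: np.ndarray                  # action feature vector from the deep network
    timestamp: float                     # time at which the action was detected in the video
    bbox: Tuple[int, int, int, int]      # coordinates of the action in the detection frame


@dataclass
class ActionClusterPool:
    """One action cluster pool: creation time, object count, mean feature, time slice list."""
    created_at: float
    mean_feature: np.ndarray
    count: int = 0
    # Assumed keying: slice index = int(timestamp // slice_seconds) -> objects in that slice.
    time_slices: Dict[int, List[ActionObject]] = field(default_factory=dict)


# The action cluster pool queue is an ordered collection of pools, initially empty.
cluster_pool_queue: List[ActionClusterPool] = []
```

The time slice list is modeled here as a dictionary keyed by slice index so that the determination step can count objects per slice directly.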
The store clerk determination comprises the following steps:
Step one: after the pedestrian action cluster pool queue has been obtained, store clerk determination is carried out according to the pedestrian results matched in the time slice list of each cluster pool in the queue. After the clustering has run for a period of time, the system starts the store clerk identification operation. Let the time at which pedestrian capture started be T_s and the current time be T_n; when T_n - T_s > 3, the store clerk determination operation begins for each action cluster pool in the pedestrian cluster pool queue;
Step two: traverse each cluster pool in the cluster pool queue and obtain its creation time T_c. When T_n - T_c > 3, build a time distribution histogram of the action objects from the pool's time slices: one time slice is recorded as one bar, the horizontal axis of the histogram is the time slice axis, and the vertical axis is the number of pedestrian objects in each bar of the pedestrian cluster pool, i.e. the number of action objects belonging to that cluster pool within one time slice;
Step three: store clerk identification is carried out by counting the number of valid bars in the pedestrian cluster pool. If the number of pedestrian objects in a bar is greater than 2, the bar is recorded as valid; otherwise it is invalid. In the current pedestrian cluster pool, when the number of valid bars in the interval from T_n - 3 to T_n is greater than 6, every pedestrian action object in that action cluster pool is recorded as a store clerk; at the same time, cluster pools whose number of valid bars from T_n - 3 to T_n is less than 3 are deleted from the cluster pool list. This operation is repeated as time goes on (an illustrative determination sketch follows these steps).
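As a rough illustration of steps one to three, the sketch below reuses the illustrative cluster-pool structures from the earlier sketch, builds the per-slice histogram for each pool, and applies the valid-bar rules; the thresholds 3, 2 and 6 follow the text, while the concrete time units (slice_seconds, window_unit_seconds) are assumptions, since the patent does not state them.

```python
import time


def determine_clerks(cluster_pool_queue, start_time,
                     slice_seconds=600, window_unit_seconds=3600):
    """Sketch of the store clerk determination (steps one to three).

    Returns the pools judged to contain store clerks and prunes stale pools.
    slice_seconds and window_unit_seconds are illustrative assumptions.
    """
    t_n = time.time()
    if t_n - start_time <= 3 * window_unit_seconds:           # T_n - T_s > 3
        return []

    clerk_pools = []
    for pool in list(cluster_pool_queue):
        if t_n - pool.created_at <= 3 * window_unit_seconds:  # T_n - T_c > 3
            continue

        # Histogram over the window [T_n - 3, T_n]: one bar per time slice,
        # bar height = number of action objects of this pool in that slice.
        first_slice = int((t_n - 3 * window_unit_seconds) // slice_seconds)
        last_slice = int(t_n // slice_seconds)
        bars = [len(pool.time_slices.get(i, []))
                for i in range(first_slice, last_slice + 1)]

        valid_bars = sum(1 for h in bars if h > 2)   # a bar is valid if it holds > 2 objects
        if valid_bars > 6:
            clerk_pools.append(pool)                 # all objects in this pool are clerks
        elif valid_bars < 3:
            cluster_pool_queue.remove(pool)          # prune pools with too few valid bars

    return clerk_pools
```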
Preferably, in flow 3 of step two of establishing/maintaining the pedestrian action cluster pool queue, the specific clustering method for a new cluster pool object is as follows: when the cluster pool queue is empty, a new action cluster pool object is created; when the action cluster pool queue is not empty, the feature vector of the current action object is matched against the average vector of each cluster pool in the queue, using the Euclidean distance as the matching criterion, where the Euclidean distance formula is
$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$
where x is the current action feature vector, y is the average feature vector of a cluster pool, and n is the feature dimension.
If the computed Euclidean distance is within the threshold A, the pedestrian object is added to the corresponding action object list in the time slice list of that cluster pool, and the pool's average feature vector is updated; if the Euclidean distance is outside the threshold A for every pool, the pedestrian object does not belong to any action cluster pool in the queue, and a new action cluster pool is created from it.
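A hedged sketch of this matching rule, reusing the illustrative data structures above, might look as follows; the value of threshold A, the slice length, and the helper name are assumptions, and np.linalg.norm is used as the Euclidean distance.

```python
import numpy as np


def match_or_create_pool(obj, cluster_pool_queue, threshold_a=0.8, slice_seconds=600):
    """Match an ActionObject against the pool queue by Euclidean distance (threshold A).

    threshold_a and slice_seconds are illustrative; the patent gives no concrete values.
    """
    best_pool, best_dist = None, float("inf")
    for pool in cluster_pool_queue:
        dist = float(np.linalg.norm(obj.feature - pool.mean_feature))  # Euclidean distance
        if dist < best_dist:
            best_pool, best_dist = pool, dist

    if best_pool is None or best_dist > threshold_a:
        # Queue empty or no pool within threshold A: create a new action cluster pool.
        best_pool = ActionClusterPool(created_at=obj.timestamp,
                                      mean_feature=obj.feature.copy())
        cluster_pool_queue.append(best_pool)

    # Add the object to the time slice matching its timestamp and update the running mean.
    slice_index = int(obj.timestamp // slice_seconds)
    best_pool.time_slices.setdefault(slice_index, []).append(obj)
    best_pool.count += 1
    best_pool.mean_feature = (
        best_pool.mean_feature * (best_pool.count - 1) + obj.feature
    ) / best_pool.count
    return best_pool
```

A new pool starts with the object's own feature as its mean, so the running-mean update stays consistent whether the object joined an existing pool or founded a new one.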
Preferably, in flow 1 of step two of establishing/maintaining the pedestrian action cluster pool queue: the action object comprises the action feature vector, a timestamp, and action object attributes recorded when the pedestrian action is detected; the action feature vector is the vector obtained by the pedestrian action feature extraction operation; the pedestrian timestamp is the time at which the pedestrian action was detected in the video; and the action object attributes include the coordinates of the pedestrian action in the current detection frame.
Preferably, in flow 3 of step two of establishing/maintaining the pedestrian action cluster pool queue: the average feature vector is the average of all feature vectors in one action cluster pool.
Preferably, in flow 3 of step two of establishing/maintaining the pedestrian action cluster pool queue: the average vector is computed by summing the action feature vectors of all objects and dividing by the number of pedestrian objects.
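As a small worked illustration (with made-up vectors), the batch average stated above and its incremental form give the same result; the incremental identity is a standard rewrite, not something stated in the patent.

```python
import numpy as np

# Illustrative feature vectors of the action objects already in one cluster pool.
pool_features = [np.array([0.1, 0.4]), np.array([0.2, 0.6]), np.array([0.3, 0.5])]

# Batch form stated in the text: sum of all action feature vectors / number of objects.
mean_feature = np.sum(pool_features, axis=0) / len(pool_features)

# Equivalent incremental form when a new object joins a pool of size n:
# mean_new = (mean_old * n + new_feature) / (n + 1)
new_feature = np.array([0.2, 0.5])
mean_feature = (mean_feature * len(pool_features) + new_feature) / (len(pool_features) + 1)
```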
Compared with the prior art, the invention has the following beneficial effects:
the invention carries out video acquisition in the store by installing a camera on the off-line store, carries out pedestrian detection on the acquired video by using a deep neural network, simultaneously carries out action feature extraction on the detected pedestrian by using the deep neural network to obtain action feature vectors, uses Euclidean distances among different action feature vectors as a clustering matching condition to effectively match the actions of the acquired pedestrian, uses the total time of the video acquisition as a time axis, divides the time axis by the same time interval to obtain time slices, simultaneously establishes a time distribution histogram by taking the time slices as a unit, realizes the store employee identification according to the distribution condition of each action class in the time histogram after the clustering matching and the condition that the frequency of the store employee appearing at a certain position in the store within a period of time is relatively frequent, and has the advantage of using the action features as the clustering matching condition, even if a certain proportion of matching errors exist in cluster matching, a clustering result cannot be changed, and meanwhile, effective identification of store clerks is correctly carried out.
Drawings
FIG. 1 is a schematic view of a pedestrian motion feature vector collection process according to the present invention;
FIG. 2 is a diagram of an action object data structure according to the present invention;
FIG. 3 is a diagram of a cluster pool queue data structure in accordance with the present invention;
FIG. 4 is a data structure diagram of the action cluster pool of the present invention;
FIG. 5 is a diagram of a time slice list data structure in accordance with the present invention;
FIG. 6 is a flow chart of pedestrian clustering overall data according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1-6, the present invention provides a technical solution: an image-based store clerk identification method comprises establishing/maintaining a pedestrian action cluster pool queue and store clerk determination, wherein establishing/maintaining the pedestrian action cluster pool queue comprises the following steps:
Step one: acquire video of the offline store through a camera installed in the store, detect pedestrians in the acquired video using a deep learning neural network to obtain individual pedestrian pictures, and extract action features from each individual pedestrian to obtain action feature vectors.
Step two: after acquiring the action characteristics of the pedestrians, starting to perform action clustering matching operation of the pedestrians, wherein the specific clustering process is as follows:
1. Whenever a pedestrian action feature is extracted, a corresponding action object is created for that feature. The action object comprises the action feature vector, a timestamp, and action object attributes recorded when the pedestrian action is detected; the action feature vector is the vector obtained by the pedestrian action feature extraction operation; the pedestrian timestamp is the time at which the pedestrian action was detected in the video; the action object attributes include the coordinates of the pedestrian action in the current detection frame.
2. After an action object is created, its feature vector is matched against the average feature vector of each action cluster pool in the action cluster pool queue. The matching method uses the Euclidean distance: the action cluster pool closest to the current action feature vector is found, and the action object is added to that pool. The action cluster pool queue is a data structure that aggregates and matches action objects whose captured pedestrian action features lie close to one another in Euclidean distance. When the action cluster pool queue is empty, an action cluster pool must first be created. The data structure of an action cluster pool comprises the creation time of the pool, the number of pedestrian action objects stored in the pool, the average action feature vector used for clustering, and the pool's time slice list.
3. Find the time slice corresponding to the creation time of each pedestrian object, add the object to it, and update the average feature vector of the pedestrian cluster pool. The average feature vector is the average of all feature vectors in one action cluster pool, computed by summing the action feature vectors of all objects and dividing by the number of pedestrian objects. If the cluster pool queue is empty, or no cluster pool close to the current action feature vector is found, a new cluster pool object must be created. The specific clustering method for a new cluster pool object is as follows: when the cluster pool queue is empty, a new action cluster pool object is created; when the action cluster pool queue is not empty, the feature vector of the current action object is matched against the average vector of each cluster pool in the queue, using the Euclidean distance as the matching criterion, where the Euclidean distance formula is
$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$
where x is the current action feature vector, y is the average feature vector of a cluster pool, and n is the feature dimension.
If the computed Euclidean distance is within the threshold A, the pedestrian object is added to the corresponding action object list in the time slice list of that cluster pool, and the pool's average feature vector is updated; if the Euclidean distance is outside the threshold A for every pool, the pedestrian object does not belong to any action cluster pool in the queue, and a new action cluster pool is created from it.
The store clerk determination comprises the following steps:
Step one: after the pedestrian action cluster pool queue has been obtained, store clerk determination is carried out according to the pedestrian results matched in the time slice list of each cluster pool in the queue. After the clustering has run for a period of time, the system starts the store clerk identification operation. Let the time at which pedestrian capture started be T_s and the current time be T_n; when T_n - T_s > 3, the store clerk determination operation begins for each action cluster pool in the pedestrian cluster pool queue.
Step two: searching each cluster pool in the cluster pool queue, and acquiring the creation time of each cluster pool as TcWhen T is satisfiedn-TcAnd if the bar number is more than 3, establishing a time distribution histogram of the action object by using the time slices of the clustering pool, wherein one time slice is marked as one bar, the horizontal axis of the histogram is a time slice axis, and the vertical axis of the histogram is the number of the pedestrian objects in each bar in the pedestrian clustering pool, namely the number of the action objects belonging to the object clustering pool in one time slice.
Step three: the storeman identification is carried out by counting the number of valid bars in the pedestrian clustering pool, and if the number of the pedestrian objects in the bars is more than 2, the bars are marked as valid, otherwise, the bars are invalid, and T is in the current pedestrian clustering poolnTo Tn-3 when the effective bar number is more than 6, recording that each pedestrian action object in the action clustering pool is a shop assistant, and simultaneously recording TnTo TnAnd deleting the clustering pool with the effective bar number of-3 less than 3 from the clustering pool list, and repeating the operation over time.
In summary: in this image-based store clerk identification method, store clerk identification means effectively identifying the clerks who appear in the video captured by the camera. The store clerk identification referred to in this method can only distinguish whether a captured pedestrian is a store clerk; it cannot determine the personal identity of each clerk. The so-called pedestrian action clustering performs pedestrian detection on the offline store video, extracts action features from the detected pedestrians to obtain pedestrian action feature vectors, uses the Euclidean distance between different action feature vectors as the clustering matching condition, and aggregates all detected pedestrian actions that fall within a certain Euclidean distance range; the clustering result may therefore aggregate and match the same action performed by different people. The solution is summarized as follows. First, video is captured in the offline store with the cameras deployed there. While capturing, a deep learning neural network detects pedestrians in the video at an interval of one frame per second. When a pedestrian is detected in a frame, a picture of the detected pedestrian is obtained, and the deep learning neural network extracts action features from that picture to obtain the pedestrian's action features. The Euclidean distances between features are then used as the clustering matching condition between different actions, yielding different action cluster pools. The capture time of each pedestrian in a cluster pool, together with the time at which video capture started, is used as the time condition to build a time distribution histogram. By judging the distribution of the time histogram of each action cluster pool, the method exploits the fact that a store clerk's actions at a given position in the store recur continuously over a period of time. Its advantage is that it does not track pedestrian trajectories: it relies on the repetition of a clerk's actions at a given position within a certain period, performs simple cluster matching of pedestrian actions captured at different times, and identifies store clerks through a few decision conditions, which reduces the difficulty of store clerk identification. With this identification approach, store clerks can be identified effectively even if the matching results contain a certain proportion of matching errors.
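To show how the pieces described above could fit together frame by frame, here is an illustrative end-to-end sketch; the OpenCV capture loop, the one-frame-per-second sampling, and the placeholder detector and feature extractor are assumptions standing in for the deep networks the patent leaves unspecified, and it reuses the illustrative helpers sketched earlier.

```python
import time

import cv2
import numpy as np


def detect_pedestrians(frame):
    """Placeholder for the deep-network pedestrian detector; returns (x, y, w, h) boxes."""
    raise NotImplementedError


def extract_action_feature(pedestrian_crop) -> np.ndarray:
    """Placeholder for the deep-network action feature extractor (assumed interface)."""
    raise NotImplementedError


def run_store_camera(video_source, cluster_pool_queue, start_time):
    cap = cv2.VideoCapture(video_source)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % int(fps) != 0:            # detection interval: one frame per second
            continue

        for (x, y, w, h) in detect_pedestrians(frame):
            crop = frame[y:y + h, x:x + w]
            obj = ActionObject(feature=extract_action_feature(crop),
                               timestamp=time.time(),
                               bbox=(x, y, w, h))
            match_or_create_pool(obj, cluster_pool_queue)    # clustering matching

        # Periodically run the determination step sketched earlier.
        determine_clerks(cluster_pool_queue, start_time)
    cap.release()
```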
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The electrical components presented in this document are all electrically connected to an external master controller and to the 220 V mains supply, and the master controller may be a conventional, known control device such as a computer.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (5)

1. An image-based store clerk identification method, comprising establishing/maintaining a pedestrian action cluster pool queue and store clerk determination, characterized in that establishing/maintaining the pedestrian action cluster pool queue comprises the following steps:
Step one: acquire video of the offline store through a camera installed in the store, detect pedestrians in the acquired video using a deep learning neural network to obtain individual pedestrian pictures, and extract action features from each individual pedestrian to obtain action feature vectors;
Step two: after the pedestrian action features have been acquired, start the pedestrian action clustering matching operation. The specific clustering flow is as follows:
1. and when one pedestrian action feature is extracted, creating a corresponding action object for the action feature.
2. After an action object is created, its feature vector is matched against the average feature vector of each action cluster pool in the action cluster pool queue. The matching method uses the Euclidean distance: the action cluster pool closest to the current action feature vector is found, and the action object is added to that pool. The action cluster pool queue is a data structure that aggregates and matches action objects whose captured pedestrian action features lie close to one another in Euclidean distance. When the action cluster pool queue is empty, an action cluster pool must first be created. The data structure of an action cluster pool comprises the creation time of the pool, the number of pedestrian action objects stored in the pool, the average action feature vector used for clustering, and the pool's time slice list.
3. Find the time slice corresponding to the creation time of each pedestrian object, add the object to it, and update the average feature vector of the pedestrian cluster pool. If the cluster pool queue is empty, or no cluster pool close to the current action feature vector is found, create a new cluster pool object.
The store clerk determination comprises the following steps:
Step one: after the pedestrian action cluster pool queue has been obtained, store clerk determination is carried out according to the pedestrian results matched in the time slice list of each cluster pool in the queue. After the clustering has run for a period of time, the system starts the store clerk identification operation. Let the time at which pedestrian capture started be T_s and the current time be T_n; when T_n - T_s > 3, the store clerk determination operation begins for each action cluster pool in the pedestrian cluster pool queue;
Step two: traverse each cluster pool in the cluster pool queue and obtain its creation time T_c. When T_n - T_c > 3, build a time distribution histogram of the action objects from the pool's time slices: one time slice is recorded as one bar, the horizontal axis of the histogram is the time slice axis, and the vertical axis is the number of pedestrian objects in each bar of the pedestrian cluster pool, i.e. the number of action objects belonging to that cluster pool within one time slice;
Step three: store clerk identification is carried out by counting the number of valid bars in the pedestrian cluster pool. If the number of pedestrian objects in a bar is greater than 2, the bar is recorded as valid; otherwise it is invalid. In the current pedestrian cluster pool, when the number of valid bars in the interval from T_n - 3 to T_n is greater than 6, every pedestrian action object in that action cluster pool is recorded as a store clerk; at the same time, cluster pools whose number of valid bars from T_n - 3 to T_n is less than 3 are deleted from the cluster pool list. This operation is repeated as time goes on.
2. The image-based store clerk identification method according to claim 1, wherein, in flow 3 of step two of establishing/maintaining the pedestrian action cluster pool queue, the specific clustering method for a new cluster pool object is as follows: when the cluster pool queue is empty, a new action cluster pool object is created; when the action cluster pool queue is not empty, the feature vector of the current action object is matched against the average vector of each cluster pool in the queue, using the Euclidean distance as the matching criterion, where the Euclidean distance formula is
$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$
where x is the current action feature vector, y is the average feature vector of a cluster pool, and n is the feature dimension.
If the computed Euclidean distance is within the threshold A, the pedestrian object is added to the corresponding action object list in the time slice list of that cluster pool, and the pool's average feature vector is updated; if the Euclidean distance is outside the threshold A for every pool, the pedestrian object does not belong to any action cluster pool in the queue, and a new action cluster pool is created from it.
3. The image-based store clerk identification method according to claim 1, wherein, in flow 1 of step two of establishing/maintaining the pedestrian action cluster pool queue: the action object comprises the action feature vector, a timestamp, and action object attributes recorded when the pedestrian action is detected; the action feature vector is the vector obtained by the pedestrian action feature extraction operation; the pedestrian timestamp is the time at which the pedestrian action was detected in the video; and the action object attributes include the coordinates of the pedestrian action in the current detection frame.
4. The image-based store clerk identification method according to claim 1, wherein, in flow 3 of step two of establishing/maintaining the pedestrian action cluster pool queue: the average feature vector is the average of all feature vectors in one action cluster pool.
5. The image-based store clerk identification method according to claim 1, wherein, in flow 3 of step two of establishing/maintaining the pedestrian action cluster pool queue: the average vector is computed by summing the action feature vectors of all objects and dividing by the number of pedestrian objects.
CN202110210844.9A 2021-02-25 2021-02-25 Shop assistant identification method based on image Active CN112818922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110210844.9A CN112818922B (en) 2021-02-25 2021-02-25 Shop assistant identification method based on image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110210844.9A CN112818922B (en) 2021-02-25 2021-02-25 Shop assistant identification method based on image

Publications (2)

Publication Number Publication Date
CN112818922A true CN112818922A (en) 2021-05-18
CN112818922B CN112818922B (en) 2022-08-02

Family

ID=75865594

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110210844.9A Active CN112818922B (en) 2021-02-25 2021-02-25 Shop assistant identification method based on image

Country Status (1)

Country Link
CN (1) CN112818922B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7957565B1 (en) * 2007-04-05 2011-06-07 Videomining Corporation Method and system for recognizing employees in a physical space based on automatic behavior analysis
CN104408396A (en) * 2014-08-28 2015-03-11 浙江工业大学 Action recognition method of locality matching window based on temporal pyramid
CN109448026A (en) * 2018-11-16 2019-03-08 南京甄视智能科技有限公司 Passenger flow statistical method and system based on head and shoulder detection
CN109886068A (en) * 2018-12-20 2019-06-14 上海至玄智能科技有限公司 Action behavior recognition methods based on exercise data
CN109902565A (en) * 2019-01-21 2019-06-18 深圳市烨嘉为技术有限公司 The Human bodys' response method of multiple features fusion
CN109978668A (en) * 2019-04-02 2019-07-05 苏州工业职业技术学院 Based on the matched transaction customer recognition methods of timestamp, system, equipment and medium
US20190303492A1 (en) * 2018-03-29 2019-10-03 Hamilton Sundstrand Corporation Data-driven features via signal clustering
CN111126119A (en) * 2018-11-01 2020-05-08 百度在线网络技术(北京)有限公司 Method and device for counting user behaviors arriving at store based on face recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨江峰 (Yang Jiangfeng): "Research on Video-Based Human Action Analysis and Recognition" (基于视频的人体动作分析与识别的研究), Wanfang dissertation database (万方学位) *

Also Published As

Publication number Publication date
CN112818922B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN106998444B (en) Big data face monitoring system
CN104881637B (en) Multimodal information system and its fusion method based on heat transfer agent and target tracking
CN104091176B (en) Portrait comparison application technology in video
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
Avgerinakis et al. Recognition of activities of daily living for smart home environments
CN111145223A (en) Multi-camera personnel behavior track identification analysis method
Li et al. Robust people counting in video surveillance: Dataset and system
CN107657232B (en) Pedestrian intelligent identification method and system
CN107153820A (en) A kind of recognition of face and movement locus method of discrimination towards strong noise
Xu et al. Real-time pedestrian detection based on edge factor and Histogram of Oriented Gradient
CN109344842A (en) A kind of pedestrian's recognition methods again based on semantic region expression
CN107230267A (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN110796074A (en) Pedestrian re-identification method based on space-time data fusion
CN112116635A (en) Visual tracking method and device based on rapid human body movement
CN111723773A (en) Remnant detection method, device, electronic equipment and readable storage medium
CN111079720B (en) Face recognition method based on cluster analysis and autonomous relearning
CN111027385A (en) Clustering visitor counting method, system, equipment and computer readable storage medium
CN103605993A (en) Image-to-video face identification method based on distinguish analysis oriented to scenes
CN105447463A (en) Camera-crossing automatic tracking system for transformer station based on human body feature recognition
CN112818922B (en) Shop assistant identification method based on image
Taha et al. Exploring behavior analysis in video surveillance applications
CN106845361B (en) Pedestrian head identification method and system
CN115272967A (en) Cross-camera pedestrian real-time tracking and identifying method, device and medium
CN115861869A (en) Gait re-identification method based on Transformer
CN110830734B (en) Abrupt change and gradual change lens switching identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant