CN115424345A - Behavior analysis method and related device - Google Patents

Behavior analysis method and related device

Info

Publication number
CN115424345A
CN115424345A
Authority
CN
China
Prior art keywords
behavior
information
intention
video
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211064835.4A
Other languages
Chinese (zh)
Inventor
常艺馨
孙悦
余胜男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202211064835.4A priority Critical patent/CN115424345A/en
Publication of CN115424345A publication Critical patent/CN115424345A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/48 - Matching video sequences

Abstract

The application discloses a behavior analysis method and a related device, which can be applied to the fields of big data and finance. The method obtains a plurality of videos shot by a plurality of cameras; identifies the persons appearing in each video based on the cross-camera tracking technology ReID, thereby obtaining, for at least one person, behavior track information, stay time information, and a complete video corresponding to the behavior track information; recognizes, based on a human body action recognition technology, the action posture of the person in the complete video to obtain the action posture information of the person; and analyzes and determines the behavioral intention of the person according to the behavior track information, the stay time information, and the action posture information. The method can thus identify a customer's behavioral intention and provide the corresponding automated service in a targeted manner, thereby improving service efficiency.

Description

Behavior analysis method and related device
Technical Field
The invention relates to the field of big data and the field of finance, in particular to a behavior analysis method and a related device.
Background
With the development of the internet, banks keep moving transaction processes online and reducing the number of staff at their outlets, yet the number of customers who choose to handle business at an outlet still exceeds the number of staff. During peak hours in particular, lobby staff often cannot help every customer in time, and many services are missed: early-arriving customers waiting for consultation are forgotten; crowds at smart counters and ATMs are not guided or reminded to keep a safe distance; and customers who need special care, such as the elderly and children, go unnoticed. In short, because staff cannot quickly grasp each customer's intention, manual service often suffers from low efficiency and untimely response.
Disclosure of Invention
In view of the above, the present invention provides a behavior analysis method and related apparatus that overcomes or at least partially solves the above problems.
In a first aspect, a method of behavioral analysis, comprises:
obtaining a plurality of videos shot by a plurality of cameras;
identifying the persons appearing in each video based on the cross-camera tracking technology ReID, so as to obtain, for at least one person, behavior track information, stay time information, and a complete video corresponding to the behavior track information;
recognizing, based on a human body action recognition technology, the action posture of the person in the complete video to obtain action posture information of the person;
and analyzing and determining the behavioral intention of the person according to the behavior track information, the stay time information, and the action posture information.
With reference to the first aspect, in some optional embodiments, the obtaining a plurality of videos captured by a plurality of cameras includes:
and obtaining a plurality of videos of different areas in the same scene shot by a plurality of cameras.
With reference to the first aspect, in some optional embodiments, the analyzing and determining the behavioral intention of the person according to the behavior track information, the stay time information, and the action posture information includes:
analyzing human body key points of the person according to the behavior track information, the stay time information, and the action posture information, thereby obtaining a feature set of the person;
and comparing the features of the feature set with a pre-trained feature set library to determine the behavioral intention of the person corresponding to the feature set, wherein the behavioral intentions corresponding to different feature sets are pre-established in the feature set library.
With reference to the first aspect, in some optional embodiments, after the analyzing and determining of the behavioral intention of the person according to the behavior track information, the stay time information, and the action posture information, the method further includes:
sending out a corresponding alarm prompt through a microphone or a display screen according to the behavioral intention.
In a second aspect, a behavior analysis device includes: a video obtaining unit, a person tracking unit, a posture identifying unit and an intention determining unit;
the video obtaining unit is used for obtaining a plurality of videos shot by a plurality of cameras;
the person tracking unit is used for identifying the persons appearing in each video based on the cross-camera tracking technology ReID, so as to obtain, for at least one person, behavior track information, stay time information, and a complete video corresponding to the behavior track information;
the gesture recognition unit is used for recognizing, based on a human body action recognition technology, the action posture of the person in the complete video, so as to obtain action posture information of the person;
and the intention determining unit is used for analyzing and determining the behavioral intention of the person according to the behavior track information, the stay time information, and the action posture information.
With reference to the second aspect, in some optional embodiments, the video obtaining unit includes: a video acquisition subunit;
the video obtaining subunit is configured to obtain multiple videos of different areas in the same scene shot by multiple cameras.
With reference to the second aspect, in certain optional embodiments, the intention determining unit includes: a feature set analysis subunit and an intention analysis subunit;
the feature set analysis subunit is configured to analyze human body key points of the person according to the behavior track information, the stay time information, and the action posture information, so as to obtain a feature set of the person;
the intention analysis subunit is configured to compare the features of the feature set with a pre-trained feature set library, so as to determine the behavioral intention of the person corresponding to the feature set, where the behavioral intentions corresponding to different feature sets are pre-established in the feature set library.
In combination with the second aspect, in certain alternative embodiments, the apparatus further comprises: an alarm prompting unit;
and the alarm prompting unit is used for sending out a corresponding alarm prompt through a microphone or a display screen according to the behavioral intention, after the behavioral intention of the person has been analyzed and determined according to the behavior track information, the stay time information, and the action posture information.
In a third aspect, a computer-readable storage medium has stored thereon a program which, when executed by a processor, implements a behavior analysis method as in any one of the above.
In a fourth aspect, an electronic device includes at least one processor, and at least one memory, a bus, connected to the processor; the processor and the memory complete mutual communication through the bus; the processor is configured to call program instructions in the memory to perform any of the behavior analysis methods described above.
By means of the above technical scheme, the behavior analysis method and the related device provided by the present invention obtain a plurality of videos shot by a plurality of cameras; identify the persons appearing in each video based on the cross-camera tracking technology ReID, so as to obtain, for at least one person, behavior track information, stay time information, and a complete video corresponding to the behavior track information; recognize, based on a human body action recognition technology, the action posture of the person in the complete video to obtain the action posture information of the person; and analyze and determine the behavioral intention of the person according to the behavior track information, the stay time information, and the action posture information. The invention can thus identify a customer's behavioral intention and provide the corresponding automated service in a targeted manner, thereby improving service efficiency.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart illustrating a first behavior analysis method provided by the present invention;
FIG. 2 is a flow chart illustrating a second method of behavior analysis provided by the present invention;
FIG. 3 is a flow chart illustrating a third method of behavior analysis provided by the present invention;
FIG. 4 is a flow chart illustrating a fourth method of behavior analysis provided by the present invention;
fig. 5 is a schematic structural diagram of a first behavior analysis device provided by the present invention;
fig. 6 is a schematic structural diagram of a second behavior analysis device provided by the present invention;
fig. 7 shows a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
It should be noted that the behavior analysis method and the related device provided by the present invention can be used in the field of big data and the field of finance, for example in big-data application scenarios in the financial field, and can equally be used in any other field. The above is only an example and does not limit the application fields of the behavior analysis method and the related device provided by the present invention.
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in fig. 1, the present invention provides a behavior analysis method, including: S100, S200, S300, and S400;
S100, obtaining a plurality of videos shot by a plurality of cameras;
Optionally, a plurality of cameras are generally already installed at current bank outlets, and their shooting areas can cover every area of the outlet. The present invention can therefore identify customer intentions on the basis of the cameras already installed at a bank outlet. Of course, new cameras can also be installed and deployed as actually needed to shoot the corresponding videos.
Optionally, each camera may continuously capture video of a different area, and the areas captured by different cameras may overlap. The present invention can therefore obtain the videos captured by the plurality of cameras and, for any person appearing in them, identify from the videos in the subsequent process the complete information expressed by that person over the whole course of the videos, including the full behavior track information, the full stay time information, the full action posture information, and the like.
Optionally, the number and duration of the obtained videos are not specifically limited and may be set according to actual needs. For example, if user 1 appears in video 1 during the first 10 minutes, in video 2 during the middle 10 minutes, and in video 3 during the last 10 minutes, the present invention obtains videos 1, 2, and 3 so that the information of user 1 can subsequently be identified.
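Selecting every video in which a given person appears, as in the example above, can be sketched as a simple lookup (an illustrative sketch only, not part of the disclosed method; the video IDs and person labels are hypothetical):

```python
def videos_for_person(appearances, person):
    """Given a mapping {video_id: set of person IDs detected in that video},
    return the sorted list of videos in which the given person appears."""
    return sorted(v for v, people in appearances.items() if person in people)

# Hypothetical example mirroring the description: user 1 appears in videos 1-3.
appearances = {
    "video_1": {"user_1", "user_2"},
    "video_2": {"user_1"},
    "video_3": {"user_1", "user_3"},
    "video_4": {"user_2"},
}
print(videos_for_person(appearances, "user_1"))  # ['video_1', 'video_2', 'video_3']
```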
For example, as shown in fig. 2, in combination with the embodiment shown in fig. 1, in some alternative embodiments, the S100 includes: S110;
and S110, acquiring a plurality of videos of different areas in the same scene shot by a plurality of cameras.
Optionally, the same scene may be understood as a lobby of a banking outlet, which is not limited by the present invention.
S200, identifying the persons appearing in each video based on the cross-camera tracking technology ReID, so as to obtain, for at least one person, behavior track information, stay time information, and a complete video corresponding to the behavior track information;
Optionally, the cross-camera tracking technology ReID (Person Re-Identification, abbreviated ReID) is well known in the art and is therefore not described at length here; please refer to the related descriptions in the art for details. It should be noted that cross-camera tracking is a current direction of computer-vision research, mainly solving the identification and retrieval of pedestrians across cameras and across scenes. Although single-camera video-sequence analysis has advanced considerably in recent years, a single camera cannot cover all areas. To analyze pedestrians over the wide area covered by a camera network, much recent research has turned to pedestrian re-identification across the fields of view of multiple cameras, under the implicit assumption that the pedestrian images are captured within a similar time period, during which the pedestrians' clothing and body shapes do not change much. The technology can recognize a pedestrian from information such as clothing, posture, and hair style.
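Although ReID itself is known in the art, its core matching step can be illustrated with a small sketch (not part of the claimed method; the embeddings, person IDs, and the 0.8 threshold below are invented for the example). Each detected person is represented by an appearance feature vector produced by some upstream model, and detections from different cameras are linked when their vectors are sufficiently similar:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_person(query_embedding, gallery, threshold=0.8):
    """Return the gallery person ID whose embedding best matches the query,
    or None if no gallery entry exceeds the similarity threshold."""
    best_id, best_score = None, threshold
    for person_id, embedding in gallery.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# Hypothetical gallery of already-tracked persons (one embedding each).
gallery = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.1, 0.8, 0.5],
}
detection = [0.88, 0.12, 0.31]  # embedding of a person seen on another camera
print(match_person(detection, gallery))  # person_A
```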
Optionally, the behavior track information may represent the movement track of the corresponding person through the whole scene, and the stay time information may represent how long the corresponding person stays in each area of the scene.
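A minimal sketch of how stay time information could be derived from timestamped sightings follows (illustrative only, not the patent's implementation; the timestamps and area names are invented):

```python
from collections import defaultdict

def dwell_times(sightings):
    """Given time-ordered (timestamp_seconds, area) sightings of one person,
    sum the time spent in each area. Each interval between consecutive
    sightings is attributed to the area of the earlier sighting."""
    totals = defaultdict(float)
    for (t0, area), (t1, _) in zip(sightings, sightings[1:]):
        totals[area] += t1 - t0
    return dict(totals)

# Hypothetical sightings: entrance, then waiting area, then counter.
sightings = [(0, "entrance"), (30, "waiting"), (330, "waiting"),
             (360, "counter"), (600, "counter")]
print(dwell_times(sightings))  # {'entrance': 30.0, 'waiting': 330.0, 'counter': 240.0}
```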
Optionally, the complete video may be obtained by splicing together a plurality of video segments; it records the complete course of the corresponding person through the whole scene, so that the action posture information of that person can conveniently be identified from it.
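The splicing of per-camera clips into one complete, time-ordered video can be sketched as follows (an illustrative sketch; the clip records and camera names are hypothetical, and real splicing would also concatenate the frames themselves):

```python
def splice_track(clips):
    """Order one person's per-camera clips by start time into a single
    'complete video' timeline, and report gaps where the person was unseen."""
    timeline = sorted(clips, key=lambda c: c["start"])
    gaps = [(prev["end"], nxt["start"])
            for prev, nxt in zip(timeline, timeline[1:])
            if nxt["start"] > prev["end"]]
    return timeline, gaps

clips = [
    {"camera": "cam2", "start": 40, "end": 90},
    {"camera": "cam1", "start": 0,  "end": 35},
    {"camera": "cam3", "start": 90, "end": 120},
]
timeline, gaps = splice_track(clips)
print([c["camera"] for c in timeline])  # ['cam1', 'cam2', 'cam3']
print(gaps)                             # [(35, 40)]
```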
S300, recognizing, based on a human body action recognition technology, the action posture of the person in the complete video, so as to obtain the action posture information of the person;
Optionally, human body action recognition technology is well known in the art and is therefore not described at length here; please refer to the related descriptions in the art for details. It should be noted that, given an image or a video, human body action recognition technology can analyze and judge the behavior and actions of the persons in it, and can be applied to intelligent video surveillance.
Optionally, the action posture information may represent which actions the corresponding person performed in sequence in the whole scene, the duration of each action, and the like. The present invention can preset different action classes, such as kicking, covering the head, clenching the fists, stomping, jumping, squatting, craning the neck, raising a fist, pacing, and looking around, so as to identify which actions appear, in chronological order, in the complete video, thereby obtaining the corresponding action posture information.
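One simple way to turn per-frame action labels into the ordered action posture information described above is run-length encoding (an illustrative sketch, not the patent's implementation; the label names and 25 fps frame rate are assumptions):

```python
def action_timeline(frame_labels, fps=25):
    """Collapse a chronological list of per-frame action labels into an
    ordered list of (action, duration_seconds) segments."""
    segments = []
    for label in frame_labels:
        if segments and segments[-1][0] == label:
            segments[-1][1] += 1          # extend the current segment
        else:
            segments.append([label, 1])   # start a new segment
    return [(action, frames / fps) for action, frames in segments]

# Hypothetical classifier output: 2 s standing, 1 s looking around, 3 s pacing.
labels = ["standing"] * 50 + ["looking_around"] * 25 + ["pacing"] * 75
print(action_timeline(labels))
```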
S400, analyzing and determining the behavior intention of the person according to the behavior track information, the stay time information and the action posture information.
Optionally, as described above, the different kinds of information may represent different states of the user; by analyzing all of this information together, the behavioral intention of the corresponding person can be obtained, so that a corresponding service or measure can be provided to that person.
Optionally, the present invention does not limit the process of analyzing the behavioral intention of the person. For example, as shown in fig. 3, in combination with the embodiment shown in fig. 1, in some alternative embodiments, the S400 includes: S410 and S420;
S410, analyzing human body key points of the person according to the behavior track information, the stay time information, and the action posture information, thereby obtaining a feature set of the person;
Optionally, human body key points are a concept well known in the art and are not described at length here; please refer to the related descriptions in the art for details. It should be noted that, by analyzing the human body key points, a feature set of the person is determined, including a track feature, a time feature, a posture feature, and the like.
S420, comparing the features of the feature set with a pre-trained feature set library to determine the behavioral intention of the person corresponding to the feature set, wherein the behavioral intentions corresponding to different feature sets are pre-established in the feature set library.
Optionally, the present invention may compute in advance, by means of big-data computation, the behavioral intentions corresponding to different feature sets, and then store the different feature sets together with their corresponding behavioral intentions in the feature set library, so that feature comparison can be performed directly against the feature set library when needed.
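The feature comparison against a pre-built library can be sketched as a nearest-prototype lookup (illustrative only; the intent names, prototype vectors, and the use of squared Euclidean distance are assumptions, not part of the disclosure):

```python
def nearest_intent(features, library):
    """Match a person's feature vector against prototype feature sets in a
    pre-built library and return the intent of the closest prototype."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(library, key=lambda intent: sq_dist(features, library[intent]))

# Hypothetical library: one prototype feature vector per behavioral intention.
library = {
    "withdraw_cash":    [0.9, 0.1, 0.2],
    "consult_staff":    [0.2, 0.8, 0.1],
    "needs_assistance": [0.1, 0.2, 0.9],
}
person = [0.15, 0.75, 0.2]
print(nearest_intent(person, library))  # consult_staff
```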
Optionally, in order to improve its accuracy, the present invention may also monitor the real intention of the corresponding person, that is, determine the real intention from the behavior the person finally completes. The present invention can then compare the real intention with the behavioral intention determined by feature comparison; if the two are inconsistent, the real intention and the feature set of the person are stored correspondingly in the feature set library, thereby updating the feature set library.
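That update step can be sketched as follows (an illustrative sketch; the data layout, with a list of feature sets stored per intent, is an assumption made for the example):

```python
def reconcile_intent(library, feature_set, predicted_intent, actual_intent):
    """If the intent the person actually carried out differs from the intent
    predicted by feature comparison, store the feature set under the actual
    intent so that the library is corrected over time."""
    if predicted_intent != actual_intent:
        library.setdefault(actual_intent, []).append(feature_set)
        return True   # library was updated
    return False      # prediction matched reality; nothing to do

library = {"consult_staff": [[0.2, 0.8, 0.1]]}
updated = reconcile_intent(library, [0.7, 0.2, 0.3],
                           "consult_staff", "withdraw_cash")
print(updated, sorted(library))  # True ['consult_staff', 'withdraw_cash']
```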
As shown in fig. 4, in combination with the embodiment shown in fig. 1, in some optional embodiments, after S400, the method further includes: s500;
and S500, sending out corresponding alarm prompts through a microphone or a display screen according to the behavior intention.
Optionally, the present invention can set different handling measures for different behavioral intentions. For example, in a scene where customers queue to use an ATM, when the present invention recognizes a behavioral intention such as standing too close or crossing the line among the people in the queue, it can remind the corresponding person through the microphone to keep their distance and not to cross the line.
For another example, if the present invention identifies that a person has a violent behavioral intention, it can remind the corresponding staff through the microphone to handle the situation, and can also remind the corresponding staff by sending an alarm on the display screen.
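The dispatch of these handling measures can be sketched as a simple intent-to-channel mapping (illustrative only; the intent keys, channel names, and messages below are invented for the example):

```python
# Hypothetical rule table: behavioral intention -> (output channel, message).
ALERT_RULES = {
    "queue_too_close":  ("microphone",
                         "Please keep a safe distance and do not cross the line."),
    "violent_behavior": ("display_screen",
                         "Staff alert: possible violent behavior detected."),
}

def dispatch_alert(intent):
    """Map a recognized behavioral intention to an (output channel, message)
    alert; unknown intentions produce no alert."""
    return ALERT_RULES.get(intent)

print(dispatch_alert("queue_too_close"))
```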
As shown in fig. 5, the present invention provides a behavior analysis device including: a video obtaining unit 100, a person tracking unit 200, a posture identifying unit 300, and an intention determining unit 400;
the video obtaining unit 100 is configured to obtain a plurality of videos captured by a plurality of cameras;
the person tracking unit 200 is configured to identify persons appearing in each of the videos based on a cross-mirror tracking technology ReID, so as to obtain behavior track information, stay time information, and a complete video corresponding to the behavior track information of at least one of the persons;
the gesture recognition unit 300 is configured to recognize the motion gesture of the person in the complete video based on a human body motion recognition technology, so as to obtain motion gesture information of the person;
the intention determining unit 400 is configured to analyze and determine the behavior intention of the person according to the behavior trajectory information, the stay time information, and the motion posture information.
Optionally, with reference to the embodiment shown in fig. 5, in some optional embodiments, the video obtaining unit 100 includes: a video acquisition subunit;
the video obtaining subunit is configured to obtain multiple videos of different areas in the same scene shot by multiple cameras.
Optionally, in combination with the embodiment shown in fig. 5, in some optional embodiments, the intention determining unit 400 includes: a feature set analysis subunit and an intent analysis subunit;
the characteristic set analysis subunit is configured to analyze human key points of the person according to the behavior trajectory information, the stay time information, and the motion posture information, so as to obtain a characteristic set of the person;
the intention analysis subunit is configured to perform feature comparison between the feature set and a feature set library trained in advance, so as to determine the behavioral intention of the person corresponding to the feature set, where behavioral intentions corresponding to different feature sets are established in advance in the feature set library.
As shown in fig. 6, in combination with the embodiment shown in fig. 5, in some alternative embodiments, the apparatus further comprises: an alarm prompt unit 500;
the warning prompt unit 500 is configured to send a corresponding warning prompt through a microphone or a display screen according to the behavior intention after analyzing and determining the behavior intention of the person according to the behavior trajectory information, the stay time information, and the motion posture information.
The present invention provides a computer-readable storage medium on which a program is stored, the program implementing any of the above behavior analysis methods when executed by a processor.
As shown in fig. 7, the present invention provides an electronic device 70, wherein the electronic device 70 comprises at least one processor 701, at least one memory 702 connected to the processor 701, and a bus 703; the processor 701 and the memory 702 complete communication with each other through the bus 703; the processor 701 is configured to call the program instructions in the memory 702 to execute any one of the behavior analysis methods described above.
In the present invention, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method of behavioral analysis, comprising:
obtaining a plurality of videos shot by a plurality of cameras;
identifying the persons appearing in each video based on the cross-camera tracking technology ReID, so as to obtain, for at least one person, behavior track information, stay time information, and a complete video corresponding to the behavior track information;
recognizing, based on a human body action recognition technology, the action posture of the person in the complete video to obtain action posture information of the person;
and analyzing and determining the behavioral intention of the person according to the behavior track information, the stay time information, and the action posture information.
2. The method of claim 1, wherein the obtaining a plurality of videos captured by a plurality of cameras comprises:
and obtaining a plurality of videos of different areas in the same scene shot by a plurality of cameras.
3. The method of claim 1, wherein analyzing and determining the behavioral intent of the person based on the behavioral track information, the dwell time information, and the motion gesture information comprises:
analyzing human key points of the character according to the behavior track information, the stay time information and the action posture information, thereby obtaining a characteristic set of the character;
and comparing the characteristics of the characteristic set with a pre-trained characteristic set library to determine the behavior intention of the person corresponding to the characteristic set, wherein the behavior intentions corresponding to different characteristic sets are pre-established in the characteristic set library.
4. The method of claim 1, wherein after the analyzing and determining the behavioral intent of the person based on the behavioral trajectory information, the dwell time information, and the motion gesture information, the method further comprises:
and sending out a corresponding alarm prompt through a microphone or a display screen according to the behavior intention.
5. A behavior analysis device, comprising: a video obtaining unit, a person tracking unit, a posture identifying unit and an intention determining unit;
the video obtaining unit is used for obtaining a plurality of videos shot by a plurality of cameras;
the person tracking unit is used for identifying the persons appearing in each video based on the cross-camera tracking technology ReID, so as to obtain, for at least one person, behavior track information, stay time information, and a complete video corresponding to the behavior track information;
the gesture recognition unit is used for recognizing, based on a human body action recognition technology, the action posture of the person in the complete video, so as to obtain action posture information of the person;
and the intention determining unit is used for analyzing and determining the behavior intention of the person according to the behavior track information, the stay time information and the action posture information.
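The four units of claim 5 form a pipeline: obtain videos, track persons across cameras, recognize poses, then determine intent. A minimal structural sketch follows; every method body is a stub (the fake track, the "standing" pose label, and the dwell-time rule are assumptions standing in for the real ReID tracker, pose model, and intent analysis).

```python
from dataclasses import dataclass

@dataclass
class TrackResult:
    person_id: str       # identity linked across cameras by ReID
    trajectory: list     # behavior track information
    dwell_seconds: float # dwell time information
    video_clip: str      # complete video corresponding to the track

class BehaviorAnalysisDevice:
    """Sketch of claim 5's unit structure; each method stands in for
    one unit. Logic here is illustrative, not the patented algorithms."""

    def obtain_videos(self, cameras):
        # video obtaining unit: one video per camera
        return [f"video_from_{cam}" for cam in cameras]

    def track_persons(self, videos):
        # person tracking unit: a real ReID tracker would link the same
        # person across videos; here we fabricate a single track
        return [TrackResult("person_0", trajectory=videos,
                            dwell_seconds=42.0, video_clip="merged_clip")]

    def recognize_pose(self, track):
        # pose recognition unit: stub action label
        return "standing"

    def determine_intent(self, track, pose):
        # intention determining unit: trivial dwell-time + pose rule
        if track.dwell_seconds > 30 and pose == "standing":
            return "loitering"
        return "normal"

    def analyze(self, cameras):
        videos = self.obtain_videos(cameras)
        results = {}
        for track in self.track_persons(videos):
            pose = self.recognize_pose(track)
            results[track.person_id] = self.determine_intent(track, pose)
        return results
```

Running `BehaviorAnalysisDevice().analyze(["cam1", "cam2"])` exercises the whole pipeline and returns an intent per tracked person.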
6. The device of claim 5, wherein the video obtaining unit comprises a video obtaining subunit;
the video obtaining subunit is configured to obtain a plurality of videos of different areas in the same scene, captured by the plurality of cameras.
7. The device of claim 5, wherein the intention determining unit comprises a feature set analysis subunit and an intention analysis subunit;
the feature set analysis subunit is configured to analyze human body key points of the person according to the behavior track information, the dwell time information, and the action posture information, thereby obtaining a feature set of the person;
and the intention analysis subunit is configured to compare the features of the feature set with a pre-trained feature set library to determine the behavioral intention of the person corresponding to the feature set, wherein the behavioral intentions corresponding to different feature sets are pre-established in the feature set library.
8. The device of claim 5, further comprising an alarm prompting unit;
the alarm prompting unit is configured to, after the behavioral intention of the person is analyzed and determined according to the behavior track information, the dwell time information, and the action posture information, send out a corresponding alarm prompt through a speaker or a display screen according to the behavioral intention.
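The alarm prompting unit can be sketched as a rule table from behavioral intentions to output channels and messages. The channel names ("speaker", "display"), intent names, and messages below are all hypothetical placeholders.

```python
# Hypothetical mapping from behavioral intentions to alarm prompts.
ALARM_RULES = {
    "loitering":  ("speaker", "This area is monitored; please move along."),
    "tailgating": ("display", "Tailgating detected: one person per entry."),
}

def raise_alarm(intent, rules=ALARM_RULES):
    """Return the (channel, message) alarm prompt for a behavioral
    intention, or None when the intention warrants no alarm."""
    return rules.get(intent)
```

Intentions absent from the table (e.g. ordinary transactions) simply produce no prompt, so only flagged behaviors reach the speaker or display screen.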
9. A computer-readable storage medium on which a program is stored, wherein the program, when executed by a processor, implements the behavior analysis method according to any one of claims 1 to 4.
10. An electronic device, comprising at least one processor, at least one memory, and a bus connected to the processor; the processor and the memory communicate with each other through the bus; and the processor is configured to invoke program instructions in the memory to perform the behavior analysis method according to any one of claims 1 to 4.
CN202211064835.4A 2022-09-01 2022-09-01 Behavior analysis method and related device Pending CN115424345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211064835.4A CN115424345A (en) 2022-09-01 2022-09-01 Behavior analysis method and related device

Publications (1)

Publication Number Publication Date
CN115424345A 2022-12-02

Family

ID=84201600

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211064835.4A Pending CN115424345A (en) 2022-09-01 2022-09-01 Behavior analysis method and related device

Country Status (1)

Country Link
CN (1) CN115424345A (en)

Similar Documents

Publication Publication Date Title
US10445563B2 (en) Time-in-store estimation using facial recognition
JP5008269B2 (en) Information processing apparatus and information processing method
JP6013241B2 (en) Person recognition apparatus and method
US8965061B2 (en) Person retrieval apparatus
KR100886557B1 (en) System and method for face recognition based on adaptive learning
CN101027678B (en) Single image based multi-biometric system and method
JP5500303B1 (en) Monitoring system, monitoring method, monitoring program, and recording medium containing the program
WO2018180588A1 (en) Facial image matching system and facial image search system
CN109829381A (en) Dog identification management method, device, system and storage medium
CN103714631B (en) Intelligent ATM cash dispenser monitoring system based on face recognition
WO2011102416A1 (en) Moving object tracking system and moving object tracking method
CN109657624A (en) Monitoring method, the device and system of target object
Arigbabu et al. Integration of multiple soft biometrics for human identification
WO2005091211A1 (en) Interactive system for recognition analysis of multiple streams of video
JP2008243093A (en) Dictionary data registration device and method
CN103049459A (en) Feature recognition based quick video retrieval method
CN111814510B (en) Method and device for detecting legacy host
KR101957677B1 (en) System for learning based real time guidance through face recognition and the method thereof
Haji et al. Real time face recognition system (RTFRS)
JP4862518B2 (en) Face registration device, face authentication device, and face registration method
CN115205581A (en) Fishing detection method, fishing detection device and computer readable storage medium
CN114067403A (en) Method and device for recognizing difficulties of elderly people in self-service cash deposit and withdrawal
JP2022003526A (en) Information processor, detection system, method for processing information, and program
JP5552946B2 (en) Face image sample collection device, face image sample collection method, program
CN115424345A (en) Behavior analysis method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination