CN111639213A - Abnormal behavior identification method and device - Google Patents

Abnormal behavior identification method and device

Info

Publication number
CN111639213A
CN111639213A (application CN202010467702.6A; granted as CN111639213B)
Authority
CN
China
Prior art keywords
user description
description information
information
historical
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010467702.6A
Other languages
Chinese (zh)
Other versions
CN111639213B (en)
Inventor
郭文文
吕昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yitu Information Technology Co ltd
Original Assignee
Shanghai Yitu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yitu Information Technology Co ltd filed Critical Shanghai Yitu Information Technology Co ltd
Priority to CN202010467702.6A priority Critical patent/CN111639213B/en
Publication of CN111639213A publication Critical patent/CN111639213A/en
Application granted granted Critical
Publication of CN111639213B publication Critical patent/CN111639213B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06F16/583 — Information retrieval of still image data; retrieval using metadata automatically derived from the content
    • G06F16/90335 — Database query processing
    • G06F16/907 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/909 — Retrieval using geographical or spatial metadata, e.g. location
    • G06F18/23213 — Non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06V40/16 — Recognition of human faces, e.g. facial parts, sketches or expressions
    • G08B31/00 — Predictive alarm systems characterised by extrapolation or other computation using updated historic data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Library & Information Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Business, Economics & Management (AREA)
  • Computing Systems (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present disclosure relates to the field of computer technologies, and in particular to a method and an apparatus for identifying abnormal behavior timely and effectively. The method includes: acquiring a face image of a target person at a target location and generating corresponding user description information based on the face recognition result and consultation information triggered by the target person; screening out, from pre-stored historical data, the reference user description information that meets a similarity condition; determining the number of different historical target locations corresponding to the obtained reference user description information and the generation time of each piece of reference user description information; and, when the corresponding conditions are met, determining that the target person exhibits abnormal behavior and giving an alarm. In this way, whenever the target person appears at a target location, whether the target person exhibits abnormal behavior can be identified quickly and accurately from the historical data, so that an alarm is given in time.

Description

Abnormal behavior identification method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for identifying an abnormal behavior.
Background
In the offline retail business, phenomena such as scalpers maliciously driving up commodity prices, scalpers maliciously driving down commodity prices, and thieves stealing property are common. Obviously, such phenomena disturb the market economy, inflict destructive damage on the industry, and may further lead to disorder in the social order. At present, however, remedial measures can be taken only after such a phenomenon has occurred; abnormal behaviors such as driving up commodity prices, driving down commodity prices, and theft cannot be identified timely and effectively.
A new solution therefore needs to be devised to overcome the above drawbacks.
Disclosure of Invention
The disclosure provides an abnormal behavior identification method and device, which are used for identifying abnormal behaviors timely and effectively.
The specific technical solutions provided by the embodiments of the present disclosure are as follows:
in a first aspect, a method for identifying abnormal behavior includes:
acquiring a face image of a target person at a target location, generating a corresponding face recognition result using a preset image recognition algorithm, and generating, for the target person, corresponding user description information based on the face recognition result and consultation information triggered by the target person;
acquiring, from pre-stored historical data, the historical user description information generated at each historical target location other than the target location, comparing the similarity of the user description information with each piece of historical user description information, and screening out, as reference user description information, the historical user description information whose similarity with the user description information reaches a preset similarity threshold;
and when it is determined that the number of different historical target locations corresponding to the obtained reference user description information reaches a preset first number threshold and that the generation time of each piece of reference user description information meets a preset time range, determining that the target person exhibits abnormal behavior and giving an alarm.
Optionally, the comparing the similarity between the user description information and each historical user description information, and screening out the historical user description information whose similarity with the user description information reaches a preset similarity threshold value as reference user description information specifically includes:
comparing the similarity of the face recognition result with each historical face recognition result contained in each piece of historical user description information, and screening out, as reference face recognition results, the historical face recognition results whose similarity with the face recognition result reaches a preset first face similarity threshold;
determining each historical consultation information corresponding to each obtained reference face recognition result, respectively comparing the similarity of the consultation information with each historical consultation information, and screening out the historical consultation information of which the similarity with the consultation information reaches a preset information similarity threshold value as reference consultation information;
and using the obtained reference consultation information and the corresponding reference face recognition result as the reference user description information.
Optionally, the user description information further includes non-target person information, where the non-target person information represents other persons in the target location whose distance from the target person is smaller than a preset distance threshold and who appear together with the target person for longer than a preset time threshold; in that case, when screening out, as reference user description information, the historical user description information whose similarity with the user description information reaches the preset similarity threshold, the method further includes:
if no such historical consultation information is screened out, acquiring each piece of historical non-target person information corresponding to each reference face recognition result;
and when the number of pieces of historical non-target person information reaches a preset second number threshold, using each piece of historical non-target person information and the corresponding reference face recognition results as the reference user description information.
Optionally, determining that the generation time of each piece of reference user description information meets a preset time range specifically includes:
determining reference time based on the generation time of the user description information and a preset time range;
sequencing the reference user description information according to respective generation time to generate a sequencing result, and determining the reference user description information with the earliest generation time based on the sequencing result;
and when the generation time of that earliest piece of reference user description information is later than the reference time, determining that the generation time of each piece of reference user description information meets the preset time range.
Optionally, determining that the target person has an abnormal behavior, and performing an alarm, specifically including:
and after determining that the target person exhibits abnormal behavior, sending alarm indication information comprising at least the face image to a plurality of preset contacts in a preset information sending mode, wherein the plurality of contacts comprise at least the staff at the target location.
In a second aspect, an apparatus for identifying abnormal behavior includes:
an identification unit, configured to acquire a face image of a target person at a target location, generate a corresponding face recognition result using a preset image recognition algorithm, and generate, for the target person, corresponding user description information based on the face recognition result and consultation information triggered by the target person;
the screening unit is used for acquiring historical user description information generated at each historical target location other than the target location from pre-stored historical data, comparing the similarity of the user description information with the historical user description information respectively, and screening the historical user description information of which the similarity with the user description information reaches a preset similarity threshold value as reference user description information;
and the alarm unit is used for determining that the number of different historical target places corresponding to the obtained reference user description information reaches a preset first number threshold value, and determining that the target character has abnormal behavior and giving an alarm when the generation time of each reference user description information meets a preset time range.
Optionally, when comparing the similarity of the user description information with each piece of historical user description information and screening out, as reference user description information, the historical user description information whose similarity with the user description information reaches the preset similarity threshold, the screening unit is specifically configured to:
compare the similarity of the face recognition result with each historical face recognition result contained in each piece of historical user description information, and screen out, as reference face recognition results, the historical face recognition results whose similarity with the face recognition result reaches a preset first face similarity threshold;
determining each historical consultation information corresponding to each obtained reference face recognition result, respectively comparing the similarity of the consultation information with each historical consultation information, and screening out the historical consultation information of which the similarity with the consultation information reaches a preset information similarity threshold value as reference consultation information;
and using the obtained reference consultation information and the corresponding reference face recognition result as the reference user description information.
Optionally, the user description information further includes non-target person information, where the non-target person information represents other persons in the target location whose distance from the target person is smaller than a preset distance threshold and who appear together with the target person for longer than a preset time threshold; in that case, when screening out, as reference user description information, the historical user description information whose similarity with the user description information reaches the preset similarity threshold, the screening unit is further configured to:
if no such historical consultation information is screened out, acquire each piece of historical non-target person information corresponding to each reference face recognition result;
and when the number of pieces of historical non-target person information reaches a preset second number threshold, use each piece of historical non-target person information and the corresponding reference face recognition results as the reference user description information.
Optionally, when determining that the generation time of each piece of reference user description information meets the preset time range, the alarm unit is specifically configured to:
determining reference time based on the generation time of the user description information and a preset time range;
sequencing the reference user description information according to respective generation time to generate a sequencing result, and determining the reference user description information with the earliest generation time based on the sequencing result;
and when the generation time of that earliest piece of reference user description information is later than the reference time, determining that the generation time of each piece of reference user description information meets the preset time range.
Optionally, when determining that the target person has an abnormal behavior and performing an alarm, the alarm unit is specifically configured to:
and after determining that the target person exhibits abnormal behavior, send alarm indication information comprising at least the face image to a plurality of preset contacts in a preset information sending mode, wherein the plurality of contacts comprise at least the staff at the target location.
In a third aspect, an apparatus for identifying abnormal behavior includes:
a memory for storing executable instructions;
a processor, configured to read and execute the executable instructions stored in the memory, so as to implement the method for identifying abnormal behavior according to any one of the above first aspects.
In a fourth aspect, a storage medium stores instructions which, when executed by a processor, cause the processor to perform the method for identifying abnormal behavior of the first aspect.
In summary, in the embodiments of the present disclosure, a face image of a target person is acquired at a target location, and corresponding user description information is generated based on the face recognition result and consultation information triggered by the target person. Reference user description information meeting a similarity condition is then screened out from pre-stored historical data. Next, the number of different historical target locations corresponding to the obtained reference user description information and the generation time of each piece of reference user description information are determined; when the corresponding conditions are met, it is determined that the target person exhibits abnormal behavior, and an alarm is given. In this way, whenever the target person appears at a target location, whether the target person exhibits abnormal behavior can be identified quickly and accurately from the historical data, so that an alarm is given in time.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a method for identifying abnormal behavior provided in an embodiment of the present disclosure;
fig. 2 is a schematic physical structure diagram of an abnormal behavior recognition apparatus provided in an embodiment of the present disclosure;
fig. 3 is a schematic logic structure diagram of an abnormal behavior recognition apparatus provided in an embodiment of the present disclosure.
Detailed Description
In the offline retail business, in order to improve the efficiency and accuracy of identifying abnormal behaviors, the embodiments of the present disclosure proceed as follows: after a face image of a target person is acquired at a target location, corresponding user description information is generated and compared with pre-stored historical data, and each piece of reference user description information meeting preset conditions is screened out; then, whether the target person exhibits abnormal behavior is determined according to the number of different historical target locations corresponding to the pieces of reference user description information and the generation time of each piece of reference user description information, and an alarm is given.
It should be noted that, in the embodiments of the present disclosure, the abnormal behaviors include, but are not limited to, suspected malicious price manipulation, suspected theft, and the like.
Preferred embodiments of the present disclosure will be described in further detail with reference to the accompanying drawings.
It should be noted that, in the embodiment of the present disclosure, in the preparation stage, for each target person appearing in each preset target location, the following operations are performed:
the method comprises the steps of obtaining a face image of each target figure, generating a corresponding face recognition result by adopting a preset image recognition algorithm, and generating and storing corresponding user description information corresponding to each target figure on the basis of the face recognition result, consultation information triggered by each target figure and non-target figure information corresponding to each target figure.
In the embodiment of the present disclosure, the non-target person information represents other persons existing in one target location, whose distance from the target person is smaller than a preset distance threshold and whose appearance time with the target person is greater than a preset time threshold.
For example, assume a target person A appears in jewelry store 1, the preset distance threshold is 2 meters, and the preset time threshold is 10 minutes. A face image A of target person A is obtained, and a corresponding face recognition result A is generated using the preset image recognition algorithm (a k-means clustering algorithm is used as an example); the face recognition result A indicates that target person A is Zhang San. Then, based on the face recognition result A, the consultation information A triggered by Zhang San (indicating that Zhang San consulted diamond necklace 1, diamond necklace 2, and jade necklace 1), and the non-target person information A (indicating the other persons in jewelry store 1 whose distance from Zhang San is less than 2 meters and who appeared together with Zhang San for more than 10 minutes), corresponding user description information A is generated and stored. The user description information A comprises Zhang San's face recognition result A, the consultation information A, and the non-target person information A.
In this way, a large amount of historical data is accumulated over a period of time, so that when a target person appears at a target location, whether the target person exhibits abnormal behavior can be identified based on the historical data.
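The preparation-stage record described above can be sketched as a simple data structure. This is an illustrative sketch only — the patent does not specify a storage format, and every field name below is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class UserDescription:
    """One piece of user description information (all field names assumed)."""
    face_result: List[float]       # face recognition result, e.g. a feature vector
    consulted_items: List[str]     # consultation information triggered by the person
    companions: List[str]          # non-target person information (nearby companions)
    location: str                  # the (historical) target location
    generated_at: datetime         # generation time of this record

# Example mirroring the description above: target person A in jewelry store 1
record_a = UserDescription(
    face_result=[0.12, 0.85, 0.33],
    consulted_items=["diamond necklace 1", "diamond necklace 2", "jade necklace 1"],
    companions=["companion 1"],
    location="jewelry store 1",
    generated_at=datetime(2019, 4, 2),
)
```

Accumulating such records per target location over time yields the pre-stored historical data that the identification process queries.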
Referring to fig. 1, in the embodiment of the present disclosure, the identification process of the abnormal behavior is as follows:
step S101: the method comprises the steps of obtaining a face image of a target person at a target place, generating a corresponding face recognition result by adopting a preset image recognition algorithm, and generating corresponding user description information corresponding to the target person based on the face recognition result and consultation information triggered by the target person.
For example, assume the preset image recognition algorithm is the k-means algorithm. The face image 1 of the target person 1 is obtained at jewelry store 4, and the k-means algorithm is used to generate a corresponding face recognition result 1, which indicates that target person 1 is Zhang San. Then, based on the face recognition result 1 and the consultation information 1 triggered by target person 1 (which indicates that target person 1 consulted diamond necklace 1 and diamond necklace 2), corresponding user description information 1 is generated for target person 1.
Step S102: acquiring, from pre-stored historical data, each piece of historical user description information generated at each historical target location other than the target location, comparing the similarity of the user description information with each piece of historical user description information, and screening out, as reference user description information, the historical user description information whose similarity with the user description information reaches a preset similarity threshold.
Specifically, the historical user description information generated at each historical target location other than the target location is acquired from the pre-stored historical data.
For example, historical user descriptive information a, historical user descriptive information B, and historical user descriptive information C generated at the jewelry store 1, the jewelry store 2, and the jewelry store 3 are acquired from the pre-stored historical data.
After the historical user description information generated at each historical target location other than the target location is acquired, the reference user description information can be determined in, but not limited to, the following manner:
A1, comparing the similarity of the face recognition result with each historical face recognition result contained in each piece of historical user description information, and screening out, as reference face recognition results, the historical face recognition results whose similarity with the face recognition result reaches a preset first face similarity threshold.
For example, the face recognition result 1 is compared for similarity with the historical face recognition result A, the historical face recognition result B, and the historical face recognition result C contained in the historical user description information A, B, and C, which represent Zhang San, Zhang San, and Li, respectively. Assuming that the similarities between face recognition result 1 and the historical face recognition results A, B, and C are 80%, 90%, and 60% respectively, and the preset first face similarity threshold is 80%, the historical face recognition results A and B, whose similarity with face recognition result 1 reaches 80%, are screened out as reference face recognition result 1 and reference face recognition result 2.
A2, determining each historical consultation information corresponding to each obtained reference face recognition result, comparing the similarity of the consultation information with each historical consultation information, and screening out the historical consultation information of which the similarity with the consultation information reaches a preset information similarity threshold value as the reference consultation information.
For example, assume the preset information similarity threshold is 60%. The historical consultation information A and B corresponding to the obtained reference face recognition results 1 and 2 are determined, where the historical consultation information A indicates that Zhang San consulted diamond necklace 1, diamond necklace 2, and jade necklace 1, and the historical consultation information B indicates that Zhang San consulted diamond necklace 1 and diamond necklace 2. The consultation information 1 is compared for similarity with the historical consultation information A and B respectively; assuming the similarities are 67% and 100%, the historical consultation information A and B, whose similarity with the consultation information reaches the 60% threshold, are screened out as reference consultation information 1 and reference consultation information 2.
And A3, using the obtained reference consultation information and the corresponding reference face recognition result as the reference user description information.
For example, the obtained reference counseling information 1, reference counseling information 2 and corresponding reference face recognition result 1, reference face recognition result 2 are used as the reference user description information 1, reference user description information 2.
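The two-stage screening of steps A1 to A3 can be sketched as follows. The patent does not fix the similarity measures, so cosine similarity over face features and Jaccard similarity over consulted-item sets are used here purely as stand-ins, and the record layout (plain dicts with `face` and `items` keys) is likewise an assumption.

```python
def cosine_similarity(a, b):
    """Stand-in face similarity measure (the patent does not specify one)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def jaccard_similarity(items_a, items_b):
    """Stand-in consultation-information similarity measure."""
    sa, sb = set(items_a), set(items_b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def screen_reference_records(current, history, face_threshold=0.8, info_threshold=0.6):
    """A1: keep historical records whose face similarity reaches face_threshold.
    A2: among those, keep records whose consultation similarity reaches info_threshold.
    A3: the survivors are the reference user description information."""
    face_matches = [h for h in history
                    if cosine_similarity(current["face"], h["face"]) >= face_threshold]
    return [h for h in face_matches
            if jaccard_similarity(current["items"], h["items"]) >= info_threshold]
```

With figures like the example above (a 60% information threshold and item overlaps of 2/3 and 3/3), face-matched records survive the second stage as reference user description information.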
It should be noted that, in the embodiments of the present disclosure, if the user description information further includes non-target person information and no historical consultation information whose similarity with the consultation information reaches the preset information similarity threshold is screened out when step A2 is executed, each piece of historical non-target person information corresponding to each reference face recognition result is acquired.
In the embodiment of the disclosure, the non-target person information represents other persons existing in the target location, whose distance from the target person is smaller than a preset distance threshold value and whose appearance time with the target person is longer than a preset time threshold value.
For example, if no historical consultation information whose similarity with the consultation information 1 reaches the preset information similarity threshold is screened out when step A2 is executed, the historical non-target person information A and B corresponding to reference face recognition results 1 and 2 are acquired, where the historical non-target person information A and B each represent another person who appeared together with the target person.
Further, when it is determined that the number of pieces of historical non-target person information reaches a preset second number threshold, each piece of historical non-target person information and the corresponding reference face recognition results are used as the reference user description information.
For example, assuming the preset second number threshold is 2, when it is determined that the number of pieces of historical non-target person information A and B reaches 2, the historical non-target person information A and B and the corresponding reference face recognition results 1 and 2 are used as reference user description information 1 and reference user description information 2.
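The fallback just described — reverting to non-target person information when the consultation screen of step A2 yields nothing — can be sketched as follows. The dict layout and the `companions` key are assumptions for illustration.

```python
def fallback_reference_records(face_matches, info_matches, second_threshold=2):
    """If step A2 screened out nothing (info_matches is empty), fall back to
    the face-matched records that carry non-target person (companion)
    information; if at least second_threshold such records exist, use them
    as the reference user description information."""
    if info_matches:                 # A2 succeeded, no fallback needed
        return info_matches
    with_companions = [h for h in face_matches if h.get("companions")]
    if len(with_companions) >= second_threshold:
        return with_companions
    return []
```

When step A2 does produce matches, the fallback is skipped and its result is returned unchanged.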
Step S103: determining that the number of different historical target locations corresponding to the obtained reference user description information reaches a preset first number threshold and, when the generation time of each piece of reference user description information falls within a preset time range, determining that the target person exhibits abnormal behavior and raising an alarm.
Specifically, it is first determined that the number of different historical target locations corresponding to the obtained pieces of reference user description information reaches a preset first number threshold.
For example, assume the preset first number threshold is 2, reference user description information 1 corresponds to jewelry store 1, and reference user description information 2 corresponds to jewelry store 2; the number of different historical target locations corresponding to the obtained reference user description information is then determined to reach 2.
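The distinct-location check can be sketched in a few lines; the record layout is an illustrative assumption:

```python
reference_user_descriptions = [
    {"id": 1, "location": "jewelry store 1"},
    {"id": 2, "location": "jewelry store 2"},
]

def distinct_location_count(refs):
    # Count distinct historical target locations behind the screened records.
    return len({r["location"] for r in refs})

first_number_threshold = 2
location_check_passed = distinct_location_count(reference_user_descriptions) >= first_number_threshold
```

A set comprehension deduplicates locations, so two records from the same store would count once and fail a threshold of 2.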
In the embodiment of the present disclosure, it may be determined that the generation time of each piece of reference user description information satisfies a preset time range by using, but not limited to, the following manners:
B1: determine the reference time based on the generation time of the user description information and the preset time range.
For example, assuming the generation time of user description information 1 is May 1, 2019 and the preset time range is one month, the reference time is determined to be April 1, 2019.
B2: sort the pieces of reference user description information by their respective generation times to produce a sorting result, and determine from it the piece of reference user description information with the earliest generation time.
For example, suppose the generation time of reference user description information 1 is April 2, 2019 and that of reference user description information 2 is April 30, 2019; sorting them by their respective generation times yields a sorting result from which reference user description information 1 is determined to have the earliest generation time.
B3: determine the generation time of that earliest piece of reference user description information; when it is later than the reference time, determine that the generation times of all pieces of reference user description information fall within the preset time range.
For example, when the generation time of reference user description information 1, April 2, 2019, is determined to be later than April 1, 2019, the respective generation times of reference user description information 1 and 2 are determined to fall within the preset time range.
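Steps B1–B3 can be sketched as follows. The 30-day window and the function name are illustrative assumptions (the disclosure speaks only of a "preset time range"):

```python
from datetime import date, timedelta

def within_time_range(current_generation_time, reference_generation_times,
                      window=timedelta(days=30)):
    reference_time = current_generation_time - window   # B1: derive the reference time
    earliest = sorted(reference_generation_times)[0]    # B2: sort, take the earliest
    return earliest > reference_time                    # B3: earliest must be later

# The worked example above: generation time May 1, 2019, one-month window,
# reference records generated April 2 and April 30, 2019.
ok = within_time_range(date(2019, 5, 1), [date(2019, 4, 2), date(2019, 4, 30)])
```

Because the records are sorted, only the earliest generation time needs comparing against the reference time: if it passes, every later record passes too.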
When the number of different historical target locations corresponding to the pieces of reference user description information reaches the preset first number threshold and the generation time of each piece of reference user description information falls within the preset time range, it is determined that the target person is suspected of malicious price competition or suspected of theft, and alarm indication information containing at least the face image is sent, in a preset information sending mode, to a plurality of preset contacts.
In the embodiment of the disclosure, the plurality of contacts at least comprise staff located at the target location.
For example, target person 1 is determined to be suspected of malicious price competition, which is an abnormal behavior; alarm indication information 1 containing face image 1 is then sent by SMS to contact 1, contact 2, and contact 3, where contact 1, contact 2, and contact 3 are staff members at jewelry store 4.
For another example, target person 1 is determined to be suspected of malicious price competition, which is an abnormal behavior; alarm indication information 1 containing face image 1 and consultation information 1 is then sent by email to contact 4, contact 5, and contact 6, where contact 4 is a staff member at jewelry store 4, contact 5 a staff member at jewelry store 1, and contact 6 a staff member at jewelry store 2.
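The alarm step can be sketched as below. Delivery itself is stubbed out (a real system would plug in SMS or email transport), and all names are assumptions for illustration:

```python
def raise_alarm(face_image, contacts, mode="sms", extra_info=None):
    # Build alarm indication information containing at least the face image.
    alert = {"face_image": face_image, "mode": mode}
    if extra_info:
        alert.update(extra_info)  # e.g. attach the consultation information
    # Contacts include at least staff at the target location.
    return [(contact, alert) for contact in contacts]

deliveries = raise_alarm("face image 1", ["contact 1", "contact 2", "contact 3"])
```

Each contact receives the same alert payload; switching `mode` to `"email"` and passing `extra_info={"consultation": ...}` mirrors the second example above.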
Next, based on the above disclosed embodiments, the present disclosure will be further explained by taking two specific scenarios as examples.
Scene one:
number 2 of 3.2019, namely, lee four appears in a 4S shop 1 of an automobile, a face image 1 of lee four is obtained, a k-means algorithm is adopted to generate a face recognition result 1 of lee four, and then corresponding user description information 1 is generated and stored corresponding to lee four based on the face recognition result 1 and the consulting information 1 of the automobile 1 representing lee four consulting automobile brand A, the automobile 1 of automobile brand B and the automobile 1 of automobile brand C triggered by lee four, wherein the user description information 1 comprises the face recognition result 1 and the consulting information 1 of lee four, and the user description information 1, the face recognition result 1 and the consulting information 1 are respectively called historical user description information A, historical face recognition result A and historical consulting information A.
On March 3, 2019, Li Si appears in automobile 4S store 2. Face image 2 of Li Si is obtained, a k-means algorithm is adopted to generate face recognition result 2 of Li Si, and corresponding user description information 2 is generated and stored for Li Si based on face recognition result 2 and consultation information 2, triggered by Li Si, indicating that Li Si consulted car 1 of brand A, car 2 of brand B, and car 1 of brand B. User description information 2 comprises face recognition result 2 and consultation information 2 of Li Si; hereinafter, user description information 2, face recognition result 2, and consultation information 2 are referred to as historical user description information B, historical face recognition result B, and historical consultation information B, respectively.
On March 5, 2019, Li Si appears in automobile 4S store 3. Face image 3 of Li Si is obtained, a k-means algorithm is adopted to generate face recognition result 3 of Li Si, and corresponding user description information 3 is generated and stored for Li Si based on face recognition result 3 and consultation information 3, triggered by Li Si, indicating that Li Si consulted car 1 of brand A. User description information 3 comprises face recognition result 3 and consultation information 3 of Li Si; hereinafter, user description information 3, face recognition result 3, and consultation information 3 are referred to as historical user description information C, historical face recognition result C, and historical consultation information C, respectively.
On March 7, 2019, Li Si appears in automobile 4S store 4. A face image of Li Si is obtained, a k-means algorithm is adopted to generate corresponding face recognition result 4, and corresponding user description information 4 is generated for the target person based on face recognition result 4 and consultation information 4 triggered by the target person, where consultation information 4 indicates that Li Si consulted car 1 of brand A and car 1 of brand B. Historical user description information A, historical user description information B, and historical user description information C, generated by Li Si in automobile 4S stores 1, 2, and 3, are then obtained from pre-stored historical data.
Next, face recognition result 4 is compared for similarity with historical face recognition results A, B, and C contained in historical user description information A, B, and C, respectively. Assuming the similarities of face recognition result 4 with historical face recognition results A, B, and C are 95%, 96%, and 90%, and the preset first face similarity threshold is 80%, historical face recognition results A, B, and C, whose similarities with face recognition result 4 all reach 80%, are screened out and used as reference face recognition results 1, 2, and 3.
Next, assuming the preset information similarity threshold is 60%, historical consultation information A, B, and C corresponding to the obtained reference face recognition results 1, 2, and 3 are determined, and consultation information 4 is compared for similarity with each of them. Assuming the similarities of consultation information 4 with historical consultation information A, B, and C are 67%, 67%, and 50%, historical consultation information A and B, whose similarities with consultation information 4 reach 60%, are screened out as reference consultation information 1 and reference consultation information 2, and the obtained reference consultation information 1 and 2, together with the corresponding reference face recognition results 1 and 2, are taken as reference user description information 1 and reference user description information 2.
Next, assuming the preset first number threshold is 2, reference user description information 1 corresponds to automobile 4S store 1 and reference user description information 2 corresponds to automobile 4S store 2, so the number of different historical target locations corresponding to the obtained reference user description information is determined to reach 2. Meanwhile, based on the generation time of user description information 4, March 7, 2019, and the preset time range of one week, the reference time is determined to be March 1, 2019. Reference user description information 1 and 2 are sorted by their respective generation times to produce a sorting result, from which reference user description information 1 is determined to have the earliest generation time. Since that generation time, March 2, 2019, is later than March 1, 2019, the respective generation times of reference user description information 1 and 2 are determined to fall within the preset time range.
At this time, Li Si is determined to be suspected of theft, which is an abnormal behavior; alarm indication information 1 containing face image 4 and consultation information 4 is sent by email to contact 1, contact 2, and contact 3, where contact 1 is a staff member at automobile 4S store 4, contact 2 a staff member at automobile 4S store 1, and contact 3 a staff member at automobile 4S store 2.
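The consultation-similarity figures quoted in scene one (67%, 67%, 50% against a 60% threshold) can be reproduced with Jaccard similarity over the sets of consulted cars. The disclosure does not name its similarity measure, so Jaccard is used here purely as an illustrative assumption:

```python
def consult_similarity(a, b):
    # Jaccard similarity: |intersection| / |union| of the consulted items.
    return len(a & b) / len(a | b)

consult_4 = {("brand A", 1), ("brand B", 1)}
historical = {
    "A": {("brand A", 1), ("brand B", 1), ("brand C", 1)},
    "B": {("brand A", 1), ("brand B", 2), ("brand B", 1)},
    "C": {("brand A", 1)},
}

info_similarity_threshold = 0.60
screened = {key for key, h in historical.items()
            if consult_similarity(consult_4, h) >= info_similarity_threshold}
```

Under this measure, historical consultation information A and B (2 of 3 items shared) pass the 60% threshold while C (1 of 2) does not, matching the screening outcome in the scenario.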
Scene two:
On June 4, 2019, Wang Wu appears in clothing store 1; meanwhile, a Wang Yi whose distance from Wang Wu is less than 2 meters and whose co-presence time with Wang Wu exceeds 10 minutes is also present in the store. Face image 1 of Wang Wu is obtained, a k-means algorithm is adopted to generate face recognition result 1 of Wang Wu, and corresponding user description information 1 is generated and stored for Wang Wu based on face recognition result 1, consultation information 1 (triggered by Wang Wu, indicating that Wang Wu consulted garment 1), and non-target person information 1 representing Wang Yi. User description information 1 comprises face recognition result 1, consultation information 1, and non-target person information 1 of Wang Wu; hereinafter, these are referred to as historical user description information A, historical face recognition result A, historical consultation information A, and historical non-target person information A, respectively.
On June 5, 2019, Wang Wu appears in clothing store 2; meanwhile, a Wang Er whose distance from Wang Wu is less than 2 meters and whose co-presence time with Wang Wu exceeds 10 minutes is also present in the store. Face image 2 of Wang Wu is obtained, a k-means algorithm is adopted to generate face recognition result 2 of Wang Wu, and corresponding user description information 2 is generated and stored for Wang Wu based on face recognition result 2, consultation information 2 (triggered by Wang Wu, indicating that Wang Wu consulted garment 2), and non-target person information 2 representing Wang Er. User description information 2 comprises face recognition result 2, consultation information 2, and non-target person information 2 of Wang Wu; hereinafter, these are referred to as historical user description information B, historical face recognition result B, historical consultation information B, and historical non-target person information B, respectively.
On June 6, 2019, Wang Wu appears in clothing store 3; meanwhile, a Wang San whose distance from Wang Wu is less than 2 meters and whose co-presence time with Wang Wu exceeds 10 minutes is also present in the store. A face image of Wang Wu is obtained, a k-means algorithm is adopted to generate corresponding face recognition result 3, and corresponding user description information 3 is generated for Wang Wu based on face recognition result 3, consultation information 3 (triggered by Wang Wu, indicating that Wang Wu consulted garment 3), and non-target person information 3 representing Wang San. Historical user description information A and historical user description information B, generated by Wang Wu in clothing stores 1 and 2, are then obtained from pre-stored historical data.
Next, face recognition result 3 is compared for similarity with historical face recognition results A and B contained in historical user description information A and B, respectively. Assuming the similarities of face recognition result 3 with historical face recognition results A and B are 90% and 95%, and the preset first face similarity threshold is 80%, historical face recognition results A and B, whose similarities with face recognition result 3 reach 80%, are screened out and used as reference face recognition results 1 and 2.
Next, assuming the preset information similarity threshold is 60%, historical consultation information A and B corresponding to the obtained reference face recognition results 1 and 2 are determined, and consultation information 3 is compared for similarity with each of them. Assuming the similarities of consultation information 3 with historical consultation information A and B are both 0%, no historical consultation information whose similarity with consultation information 3 reaches 60% is screened out; historical non-target person information A and B corresponding to reference face recognition results 1 and 2 are therefore obtained, and since their number is determined to reach the preset second number threshold of 2, historical non-target person information A and B, together with the corresponding reference face recognition results 1 and 2, are taken as reference user description information 1 and reference user description information 2.
Next, assuming the preset first number threshold is 2, reference user description information 1 corresponds to clothing store 1 and reference user description information 2 corresponds to clothing store 2, so the number of different historical target locations corresponding to the obtained reference user description information is determined to reach 2. Meanwhile, based on the generation time of user description information 3, June 6, 2019, and the preset time range of one week, the reference time is determined to be May 31, 2019. Reference user description information 1 and 2 are sorted by their respective generation times to produce a sorting result, from which reference user description information 1 is determined to have the earliest generation time. Since that generation time, June 4, 2019, is later than May 31, 2019, the respective generation times of reference user description information 1 and 2 are determined to fall within the preset time range.
At this time, Wang Wu is determined to be suspected of malicious price competition, which is an abnormal behavior; alarm indication information containing face image 3 is sent by SMS to contact 1, contact 2, and contact 3, where contact 1 is a staff member at clothing store 3, contact 2 a staff member at clothing store 1, and contact 3 a staff member at clothing store 2.
Based on the same inventive concept, referring to fig. 2, an embodiment of the present disclosure provides an abnormal behavior recognition apparatus, which at least includes:
a memory 201 for storing executable instructions;
a processor 202 for reading and executing the executable instructions stored in the memory, and performing the following processes:
acquiring a face image of a target person at a target location, generating a corresponding face recognition result by adopting a preset image recognition algorithm, and generating corresponding user description information corresponding to the target person on the basis of the face recognition result and consultation information triggered by the target person;
acquiring historical user description information generated at each historical target location other than the target location from pre-stored historical data, respectively comparing the user description information with the similarity of the historical user description information, and screening out the historical user description information of which the similarity with the user description information reaches a preset similarity threshold value as reference user description information;
and determining that the number of different historical target places corresponding to each obtained reference user description information reaches a preset first number threshold value, and determining that the target person has abnormal behavior and giving an alarm when the generation time of each reference user description information meets a preset time range.
Optionally, when the user description information is compared with the similarity of each historical user description information, and the historical user description information whose similarity with the user description information reaches a preset similarity threshold is screened out as reference user description information, the processor 202 is specifically configured to:
comparing the face recognition result with each historical face recognition result contained in each historical user description information respectively, and screening out the historical face recognition result of which the similarity with the face recognition result reaches a preset first face similarity threshold value as a reference face recognition result;
determining each historical consultation information corresponding to each obtained reference face recognition result, respectively comparing the similarity of the consultation information with each historical consultation information, and screening out the historical consultation information of which the similarity with the consultation information reaches a preset information similarity threshold value as reference consultation information;
and using the obtained reference consultation information and the corresponding reference face recognition result as the reference user description information.
Optionally, the user description information further includes non-target person information, where the non-target person information represents any other person present in the target location whose distance from the target person is smaller than a preset distance threshold and whose co-presence time with the target person is longer than a preset time threshold; when screening out the historical user description information whose similarity with the user description information reaches the preset similarity threshold as reference user description information, the processor 202 is further configured to:
if no such historical consultation information is screened out, acquire each piece of historical non-target person information corresponding to each reference face recognition result;
and, when the number of pieces of historical non-target person information reaches a preset second number threshold, take each piece of historical non-target person information and its corresponding reference face recognition result as the reference user description information.
Optionally, when determining that the respective generation time of each piece of reference user description information falls within a preset time range, the processor 202 is specifically configured to:
determining reference time based on the generation time of the user description information and a preset time range;
sequencing the reference user description information according to respective generation time to generate a sequencing result, and determining the reference user description information with the earliest generation time based on the sequencing result;
and determining the generation time of the earliest piece of reference user description information and, when that generation time is later than the reference time, determining that the generation time of each piece of reference user description information falls within the preset time range.
Optionally, when determining that the target person has an abnormal behavior and performing an alarm, the processor 202 is specifically configured to:
determining that the target person has abnormal behavior, and sending alarm indication information at least comprising the face image to a plurality of preset contacts in a preset information sending mode, wherein the plurality of contacts at least comprise staff located at the target location.
In fig. 2, the bus architecture may include any number of interconnected buses and bridges, linking together one or more processors, represented by processor 202, and various circuits of memory, represented by memory 201. The bus architecture may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The bus interface provides an interface. The transceiver 203 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 202 is responsible for managing the bus architecture and general processing, and the memory 201 may store data used by the processor 202 in performing operations.
Based on the same inventive concept, referring to fig. 3, in an embodiment of the present disclosure, an abnormal behavior recognition apparatus is provided, which at least includes: an identification unit 301, a screening unit 302 and an alarm unit 303, wherein,
the identification unit 301 is configured to acquire a face image of a target person at a target location, generate a corresponding face identification result by using a preset image identification algorithm, and generate corresponding user description information corresponding to the target person based on the face identification result and consultation information triggered by the target person;
a screening unit 302, configured to obtain, from pre-stored historical data, each historical user description information generated at each historical target location other than the target location, compare the user description information with each historical user description information in similarity, and screen out historical user description information whose similarity with the user description information reaches a preset similarity threshold value, as reference user description information;
and an alarm unit 303, configured to determine that the number of different historical target locations corresponding to each obtained reference user description information reaches a preset first number threshold, and when the generation time of each reference user description information meets a preset time range, determine that an abnormal behavior exists in the target person, and perform an alarm.
Optionally, when the user description information is compared with the similarity of each historical user description information, and the historical user description information whose similarity with the user description information reaches a preset similarity threshold is screened out as the reference user description information, the screening unit 302 is specifically configured to:
comparing the face recognition result with each historical face recognition result contained in each historical user description information respectively, and screening out the historical face recognition result of which the similarity with the face recognition result reaches a preset first face similarity threshold value as a reference face recognition result;
determining each historical consultation information corresponding to each obtained reference face recognition result, respectively comparing the similarity of the consultation information with each historical consultation information, and screening out the historical consultation information of which the similarity with the consultation information reaches a preset information similarity threshold value as reference consultation information;
and using the obtained reference consultation information and the corresponding reference face recognition result as the reference user description information.
Optionally, the user description information further includes non-target person information, where the non-target person information represents any other person present in the target location whose distance from the target person is smaller than a preset distance threshold and whose co-presence time with the target person is longer than a preset time threshold; when screening out the historical user description information whose similarity with the user description information reaches the preset similarity threshold as the reference user description information, the screening unit 302 is further configured to:
if no such historical consultation information is screened out, acquire each piece of historical non-target person information corresponding to each reference face recognition result;
and, when the number of pieces of historical non-target person information reaches a preset second number threshold, take each piece of historical non-target person information and its corresponding reference face recognition result as reference user description information.
Optionally, when determining that the respective generation time of each piece of reference user description information falls within a preset time range, the alarm unit 303 is specifically configured to:
determining reference time based on the generation time of the user description information and a preset time range;
sequencing the reference user description information according to respective generation time to generate a sequencing result, and determining the reference user description information with the earliest generation time based on the sequencing result;
and determining the generation time of the earliest piece of reference user description information and, when that generation time is later than the reference time, determining that the generation time of each piece of reference user description information falls within the preset time range.
Optionally, when determining that the target person has an abnormal behavior and performing an alarm, the alarm unit 303 is specifically configured to:
determining that the target person has abnormal behavior, and sending alarm indication information at least comprising the face image to a plurality of preset contacts in a preset information sending mode, wherein the plurality of contacts at least comprise staff located at the target location.
Based on the same inventive concept, the embodiments of the present disclosure provide a storage medium, and when executed by a processor, the instructions in the storage medium enable the processor to perform the method for identifying abnormal behavior as described in any one of the above embodiments.
In summary, in the embodiments of the present disclosure, a face image of a target person is acquired at a target location, and corresponding user description information is generated based on the face recognition result and the consultation information triggered by the target person. Reference user description information satisfying the similarity condition is then screened out from pre-stored historical data. Next, the number of distinct historical target locations corresponding to the obtained reference user description information and the generation time of each piece of reference user description information are determined, and when the corresponding conditions are satisfied, it is determined that the target person exhibits abnormal behavior and an alarm is given. In this way, when the target person appears at the target location, whether the target person exhibits abnormal behavior can be identified quickly and accurately from the historical data, so that a warning is issued in time.
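The overall flow summarized above (similarity screening against history from other locations, distinct-location counting, time-range check) can be sketched end to end. Every name, field, similarity function, and threshold below is an illustrative assumption, not the patented embodiment:

```python
from dataclasses import dataclass

@dataclass
class UserDescription:
    face_id: str          # stands in for the face recognition result
    consultation: str     # consultation information triggered by the person
    location: str         # target location where the record was generated
    generated_at: float   # generation time (seconds)

def is_abnormal(current, history, *, similarity, sim_threshold,
                location_threshold, time_window):
    """Return True when enough similar historical records exist at enough
    distinct other locations, with the earliest one inside the preset
    time range ending at the current generation time."""
    # Screen reference records: other locations, similarity over threshold.
    references = [h for h in history
                  if h.location != current.location
                  and similarity(current, h) >= sim_threshold]
    # Count distinct historical target locations.
    if len({r.location for r in references}) < location_threshold:
        return False
    # Earliest reference must be later than (current time - time window).
    reference_time = current.generated_at - time_window
    earliest = min((r.generated_at for r in references), default=None)
    return earliest is not None and earliest > reference_time
```

For example, a trivial similarity function that requires an exact match on `face_id` and `consultation` already exercises the flow; a production system would use face-feature and text similarity instead.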
For the system/apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It is to be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or operation from another, and do not necessarily require or imply any actual such relationship or order between those entities or operations.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (10)

1. A method for identifying abnormal behavior, comprising:
acquiring a face image of a target person at a target location, generating a corresponding face recognition result by using a preset image recognition algorithm, and generating user description information corresponding to the target person based on the face recognition result and consultation information triggered by the target person;
acquiring historical user description information generated at each historical target location other than the target location from pre-stored historical data, respectively comparing the user description information with the similarity of the historical user description information, and screening out the historical user description information of which the similarity with the user description information reaches a preset similarity threshold value as reference user description information;
and determining that the number of different historical target places corresponding to each obtained reference user description information reaches a preset first number threshold value, and determining that the target person has abnormal behavior and giving an alarm when the generation time of each reference user description information meets a preset time range.
2. The method according to claim 1, wherein the comparing the similarity between the user description information and each historical user description information, and the screening of the historical user description information whose similarity with the user description information reaches a preset similarity threshold value as the reference user description information specifically includes:
comparing the face feature recognition result with each historical face recognition result contained in each historical user description information respectively, and screening out the historical face recognition result of which the similarity with the face feature recognition result reaches a preset first face similarity threshold value as a reference face recognition result;
determining each historical consultation information corresponding to each obtained reference face recognition result, respectively comparing the similarity of the consultation information with each historical consultation information, and screening out the historical consultation information of which the similarity with the consultation information reaches a preset information similarity threshold value as reference consultation information;
and using the obtained reference consultation information and the corresponding reference face recognition result as the reference user description information.
3. The method of claim 2, wherein the user description information further includes non-target person information, the non-target person information representing other persons present at the target location whose distance from the target person is smaller than a preset distance threshold and whose co-occurrence time with the target person is greater than a preset time threshold; and the screening out of the historical user description information whose similarity with the user description information reaches the preset similarity threshold as the reference user description information further includes:
if no reference consultation information is obtained, acquiring historical non-target person information corresponding to each reference face recognition result;
and when the number of pieces of historical non-target person information reaches a preset second number threshold, taking the historical non-target person information and the corresponding reference face recognition results as the reference user description information.
4. The method according to claim 1, 2 or 3, wherein determining the respective generation time of each piece of reference user description information to satisfy a preset time range specifically comprises:
determining reference time based on the generation time of the user description information and a preset time range;
sequencing the reference user description information according to respective generation time to generate a sequencing result, and determining the reference user description information with the earliest generation time based on the sequencing result;
and determining the generation time of that earliest piece of reference user description information, and when the generation time is later than the reference time, determining that the generation time of each piece of reference user description information satisfies the preset time range.
5. The method of claim 1, 2 or 3, wherein determining that the target person has abnormal behavior and alerting comprises:
determining that the target person exhibits abnormal behavior, and sending alarm indication information including at least the face image to a plurality of preset contacts in a preset information sending mode, wherein the plurality of contacts include at least staff located at the target location.
6. An apparatus for identifying abnormal behavior, comprising:
the system comprises an identification unit, a query unit and a processing unit, wherein the identification unit is used for acquiring a face image of a target person at a target place, generating a corresponding face identification result by adopting a preset image identification algorithm, and generating corresponding user description information corresponding to the target person based on the face identification result and consultation information triggered by the target person;
the screening unit is used for acquiring historical user description information generated at each historical target location other than the target location from pre-stored historical data, comparing the similarity of the user description information with the historical user description information respectively, and screening the historical user description information of which the similarity with the user description information reaches a preset similarity threshold value as reference user description information;
and the alarm unit is used for determining that the number of distinct historical target locations corresponding to the obtained reference user description information reaches a preset first number threshold, and determining that the target person has abnormal behavior and giving an alarm when the generation time of each piece of reference user description information satisfies the preset time range.
7. The apparatus according to claim 6, wherein, when comparing the similarity between the user description information and each piece of historical user description information and screening out the historical user description information whose similarity with the user description information reaches the preset similarity threshold as the reference user description information, the screening unit is specifically configured to:
comparing the face feature recognition result with each historical face recognition result contained in each historical user description information respectively, and screening out the historical face recognition result of which the similarity with the face feature recognition result reaches a preset first face similarity threshold value as a reference face recognition result;
determining each historical consultation information corresponding to each obtained reference face recognition result, respectively comparing the similarity of the consultation information with each historical consultation information, and screening out the historical consultation information of which the similarity with the consultation information reaches a preset information similarity threshold value as reference consultation information;
and using the obtained reference consultation information and the corresponding reference face recognition result as the reference user description information.
8. The apparatus of claim 7, wherein the user description information further includes non-target person information, the non-target person information representing other persons present at the target location whose distance from the target person is smaller than a preset distance threshold and whose co-occurrence time with the target person is greater than a preset time threshold; and, after screening out the historical user description information whose similarity with the user description information reaches the preset similarity threshold as the reference user description information, the screening unit is further configured to:
if no reference consultation information is obtained, acquire historical non-target person information corresponding to each reference face recognition result;
and when the number of pieces of historical non-target person information reaches a preset second number threshold, take the historical non-target person information and the corresponding reference face recognition results as the reference user description information.
9. An apparatus for identifying abnormal behavior, comprising:
a memory for storing executable instructions;
a processor for reading and executing executable instructions stored in the memory to implement the method of identifying abnormal behavior of any one of claims 1 to 5.
10. A storage medium, characterized in that instructions in the storage medium, when executed by a processor, enable the processor to perform a method of identifying abnormal behavior as claimed in any one of claims 1 to 5.
CN202010467702.6A 2020-05-28 2020-05-28 Abnormal behavior identification method and device Active CN111639213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010467702.6A CN111639213B (en) 2020-05-28 2020-05-28 Abnormal behavior identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010467702.6A CN111639213B (en) 2020-05-28 2020-05-28 Abnormal behavior identification method and device

Publications (2)

Publication Number Publication Date
CN111639213A true CN111639213A (en) 2020-09-08
CN111639213B CN111639213B (en) 2023-11-14

Family

ID=72328731

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010467702.6A Active CN111639213B (en) 2020-05-28 2020-05-28 Abnormal behavior identification method and device

Country Status (1)

Country Link
CN (1) CN111639213B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177795A (en) * 2021-06-10 2021-07-27 支付宝(杭州)信息技术有限公司 Identity recognition method, device, equipment and medium
CN113205876A (en) * 2021-07-06 2021-08-03 明品云(北京)数据科技有限公司 Method, system, electronic device and medium for determining effective clues of target person

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2862847A1 (en) * 2013-08-29 2015-02-28 Accenture Global Services Limited Identification system
CN110287889A (en) * 2019-06-26 2019-09-27 银河水滴科技(北京)有限公司 A kind of method and device of identification
CN110659397A (en) * 2018-06-28 2020-01-07 杭州海康威视数字技术股份有限公司 Behavior detection method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2862847A1 (en) * 2013-08-29 2015-02-28 Accenture Global Services Limited Identification system
CN110659397A (en) * 2018-06-28 2020-01-07 杭州海康威视数字技术股份有限公司 Behavior detection method and device, electronic equipment and storage medium
CN110287889A (en) * 2019-06-26 2019-09-27 银河水滴科技(北京)有限公司 A kind of method and device of identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邢延超, 谈正: "A face recognition scheme with controllable recognition time" *


Also Published As

Publication number Publication date
CN111639213B (en) 2023-11-14

Similar Documents

Publication Publication Date Title
US9661010B2 (en) Security log mining devices, methods, and systems
CN111401777B (en) Enterprise risk assessment method, enterprise risk assessment device, terminal equipment and storage medium
CN110689438A (en) Enterprise financial risk scoring method and device, computer equipment and storage medium
TW201802732A (en) Method and device for controlling data risk
Noviandy et al. Credit Card Fraud Detection for Contemporary Financial Management Using XGBoost-Driven Machine Learning and Data Augmentation Techniques
CN112669138A (en) Data processing method and related equipment
CN113592019A (en) Fault detection method, device, equipment and medium based on multi-model fusion
CN113792089B (en) Illegal behavior detection method, device, equipment and medium based on artificial intelligence
CN111639213A (en) Abnormal behavior identification method and device
CN111950621A (en) Target data detection method, device, equipment and medium based on artificial intelligence
CN111062642A (en) Method and device for identifying industrial risk degree of object and electronic equipment
CN115081538A (en) Customer relationship identification method, device, equipment and medium based on machine learning
CN110942314A (en) Abnormal account supervision method and device
CN113178071B (en) Driving risk level identification method and device, electronic equipment and readable storage medium
CN112200402A (en) Risk quantification method, device and equipment based on risk portrait
JP2021197089A (en) Output device, output method, and output program
CN115860465A (en) Enterprise associated data processing early warning method, system and device
CN115408672A (en) Deep early warning method, device, equipment and storage medium for blacklist
CN115689713A (en) Abnormal risk data processing method and device, computer equipment and storage medium
CN114372892A (en) Payment data monitoring method, device, equipment and medium
CN111915430A (en) Vehicle loan risk identification method and device based on vehicle frame number
CN114066209A (en) Service distribution method, device, equipment and computer storage medium
CN111382343B (en) Label system generation method and device
CN113051136A (en) Monitoring analysis method and device for unattended equipment
CN110543910A (en) Credit state monitoring system and monitoring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant