CN115376192B - User abnormal behavior determination method, device, computer equipment and storage medium - Google Patents

User abnormal behavior determination method, device, computer equipment and storage medium

Info

Publication number
CN115376192B
Authority
CN
China
Prior art keywords
target
face
clustered
determining
user
Prior art date
Legal status
Active
Application number
CN202211065727.9A
Other languages
Chinese (zh)
Other versions
CN115376192A (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Beijing Real AI Technology Co Ltd
Original Assignee
Beijing Real AI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Real AI Technology Co Ltd
Priority to CN202211065727.9A
Publication of CN115376192A
Application granted
Publication of CN115376192B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks

Abstract

The embodiment of the application discloses a method, an apparatus, a computer device and a storage medium for determining abnormal behavior of a user. The method includes the following steps: acquiring, from a plurality of data sources, an image set within a target history period, the image set including a plurality of face images to be clustered that carry space-time information; clustering the face images to be clustered to obtain a target face file of a target user; acquiring historical behavior data of the target user on at least one social platform within the target history period, as well as operator communication data; generating an actual track of the target user within the target history period according to the historical behavior data, the operator communication data and the target face file; and finally judging whether the target user has abnormal behavior according to the track to be confirmed and the actual track. Because the track to be confirmed is checked against the generated actual track, the cases in which the track cannot be confirmed or is confirmed incorrectly are reduced, and whether the target user has abnormal behavior can be determined more accurately.

Description

User abnormal behavior determination method, device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for determining abnormal behavior of a user, a computer device, and a storage medium.
Background
There are many scenarios in which the true trajectory of a person needs to be verified; for example, when the police question a suspect, it is necessary to verify whether the route described by the suspect is true.
At present, whether such a route is accurate is generally confirmed by witnesses or by the mobile phone positioning of the suspect. However, witnesses may give false testimony or may not exist at all, and the suspect may hand the mobile phone to someone else to fake a route. As a result, the track cannot be confirmed, or is confirmed incorrectly, and whether the user has abnormal behavior cannot be determined.
Disclosure of Invention
The embodiment of the application provides a method, an apparatus, a computer device and a storage medium for determining abnormal behavior of a user. An actual track of the target user is generated jointly from historical behavior data, operator communication data and a target face file, and the track to be confirmed is verified against the actual track. This reduces the cases in which the track to be confirmed cannot be confirmed or is confirmed incorrectly, so that whether the target user has abnormal behavior can be determined more accurately.
In a first aspect, an embodiment of the present application provides a method for determining abnormal behavior of a user, including:
acquiring an image set in a target history period from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information;
clustering the face images to be clustered to obtain a target face file of a target user;
acquiring historical behavior data of the target user on at least one social platform within the target history period, as well as operator communication data;
generating an actual track of the target user in the target historical period according to the historical behavior data, the operator communication data and the target face file;
and judging whether the target user has abnormal behaviors according to the track to be confirmed and the actual track.
In a second aspect, an embodiment of the present application further provides a device for determining abnormal behavior of a user, including:
the receiving and transmitting module is used for acquiring an image set in a target history period from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information;
the processing module is used for clustering the face images to be clustered to obtain a target face file of a target user; acquiring historical behavior data of the target user on at least one social platform within the target history period, as well as operator communication data; generating an actual track of the target user within the target history period according to the historical behavior data, the operator communication data and the target face file; and judging whether the target user has abnormal behaviors according to the track to be confirmed and the actual track.
In some embodiments, the processing module is specifically configured to, when executing the step of generating the actual trajectory of the target user within the target history period according to the historical behavior data, the operator communication data, and the target face profile:
extracting first time-space information from the historical behavior data;
extracting second spatiotemporal information from the operator communication data;
extracting third space-time information from the target face file;
and determining the actual track according to the first time-space information, the second time-space information, the third time-space information, the first weight corresponding to the first time-space information, the second weight corresponding to the second time-space information and the third weight corresponding to the third time-space information, wherein the third weight is greater than the second weight and the second weight is greater than the first weight.
In some embodiments, the processing module is specifically configured to, when executing the step of determining whether the target user has abnormal behavior according to the track to be confirmed and the actual track:
determining the matching degree of the track to be confirmed and the actual track;
if the matching degree is greater than or equal to a preset matching degree threshold value, determining that the target user does not have abnormal behaviors;
If the matching degree is smaller than a preset matching degree threshold value, determining that the target user has abnormal behaviors.
In some embodiments, when the step of clustering the target face image to obtain the target face file is performed, the processing module is specifically configured to:
determining the face characteristics of the face images to be clustered;
determining target similarity scores between every two face images to be clustered according to the face features;
clustering the face images to be clustered according to the target similarity score to obtain at least one face file;
and determining the target face file from at least one face file according to the target face characteristics of the target user.
In some embodiments, when the step of determining the target similarity score between the face images to be clustered according to the face features is performed, the processing module is specifically configured to:
performing pairwise comparison processing on the face features to obtain initial similarity scores between face images to be clustered;
and carrying out similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score.
In some embodiments, after executing the step of clustering the face images to be clustered according to the target similarity score to obtain at least one face file, the processing module is further configured to:
and storing at least one face file into a preset face file library.
In some embodiments, the image set further includes a human body image corresponding to the face image to be clustered, and before executing the step of clustering the face image to be clustered to obtain the target face profile of the target user, the processing module is further configured to:
determining a human feature of the human image;
determining the human body image with the human body characteristics conforming to the target human body characteristics as a target human body image;
screening the face images to be clustered according to the target human body image to obtain a target human face image;
and taking the target face image as a face image to be clustered in the image set.
In a third aspect, embodiments of the present application further provide a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the method when executing the computer program.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium storing a computer program comprising program instructions which, when executed by a processor, implement the above-described method.
Compared with the prior art, in the scheme provided by the embodiment of the application, when the track to be confirmed needs to be verified, an image set in a target history period is acquired from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information; then clustering the face images to be clustered to obtain a target face file of a target user; acquiring historical behavior data and operator communication data of the target user in the target historical period of at least one social platform; generating an actual track of the target user in the target historical period according to the historical behavior data, the operator communication data and the target face file; and finally judging whether the target user has abnormal behaviors according to the track to be confirmed and the actual track. According to the method and the device, the actual track of the target user is generated through the historical behavior data, the operator communication data and the target face file, and the track to be confirmed is verified according to the actual track, so that the situation that the track to be confirmed cannot be confirmed or errors are confirmed is reduced, and whether the target user has abnormal behaviors can be accurately confirmed.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1a is a schematic diagram of a system for determining abnormal behavior of a user according to an embodiment of the present application;
fig. 1b is an application scenario schematic diagram of a system for determining abnormal behavior of a user according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for determining abnormal behavior of a user according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a method for determining abnormal behavior of a user according to an embodiment of the present application;
fig. 4 is a schematic diagram of a clustering flow in the method for determining abnormal behavior of a user according to the embodiment of the present application;
FIG. 5 is a schematic diagram of a track to be confirmed and an actual track that match completely, according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a track to be confirmed and an actual track whose matching passes, according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a track to be confirmed and an actual track whose matching does not pass;
FIG. 8 is a schematic block diagram of a user abnormal behavior determination apparatus provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a server according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
The terms "first", "second" and the like in the description, the claims and the above drawings of the embodiments are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprise", "include" and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, system, article or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules expressly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article or apparatus. The division into modules in the embodiments of the application is only a logical division; in an actual implementation, a plurality of modules may be combined or integrated into another system, or some features may be omitted or not implemented. The couplings, direct couplings or communication connections between modules that are shown or discussed may be implemented through interfaces, and indirect couplings or communication connections between modules may be electrical or of other forms; the embodiments of the application are not limited in this respect. The modules or sub-modules described as separate components may or may not be physically separate, and may or may not be physical modules; they may also be distributed over a plurality of circuit modules, and some or all of them may be selected according to actual needs to achieve the purposes of the embodiments of the present application.
The embodiments of the application provide a method, an apparatus, a computer device and a storage medium for determining abnormal user behavior. The execution subject of the method may be the user abnormal behavior determination apparatus provided by the embodiments of the application, or a computer device integrating that apparatus. The apparatus may be implemented in hardware or software, and the computer device may be a terminal or a server.
When the computer device is a server, the server may be an independent physical server, or may be a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms, and the like.
When the computer device is a terminal, the terminal may include, but is not limited to, devices with multimedia data processing functions (e.g., video data playing and music data playing), such as smart phones, tablet computers, notebook computers, desktop computers, smart televisions, smart speakers, personal digital assistants (PDA), smart watches, and the like.
The scheme of the embodiment of the application can be realized based on an artificial intelligence technology, and particularly relates to the technical field of computer vision in the artificial intelligence technology and the fields of cloud computing, cloud storage, databases and the like in the cloud technology, and the technical fields are respectively described below.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason and make decisions.
Artificial intelligence is a comprehensive discipline that involves a wide range of fields, covering both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision (Computer Vision, CV) is the science of studying how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to recognize, track and measure targets, and further performs graphics processing so that the result becomes an image more suitable for human eyes to observe or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision technologies generally include image processing, face recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also include common biometric technologies such as face recognition and fingerprint recognition.
With research and advancement of artificial intelligence technology, research and application of artificial intelligence technology is being developed in various fields, such as common smart home, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned, automatic driving, unmanned aerial vehicles, robots, smart medical treatment, smart customer service, etc., and it is believed that with the development of technology, artificial intelligence technology will be applied in more fields and with increasing importance value.
The solution of the embodiment of the present application may be implemented based on cloud technology, and in particular, relates to the technical fields of cloud computing, cloud storage, database, and the like in the cloud technology, and will be described below.
Cloud technology (cloud technology) refers to a hosting technology that unifies hardware, software, network and other resources in a wide area network or a local area network to realize the computation, storage, processing and sharing of data. It is the general term for the network technology, information technology, integration technology, management platform technology, application technology and the like applied in the cloud computing business model; these resources can form a resource pool and be used on demand, which is flexible and convenient. Cloud computing technology will become an important support. The background services of technical network systems require a large amount of computing and storage resources, for example video websites, image websites and other portal websites. With the development and application of the internet industry, every object may have its own identification mark in the future, which needs to be transmitted to a background system for logical processing; data at different levels will be processed separately, and all kinds of industry data need strong system backing, which can only be realized through cloud computing. In the embodiments of the application, the identification results can be stored by means of cloud technology.
Cloud storage (cloud storage) is a new concept that extends and develops in the concept of cloud computing, and a distributed cloud storage system (hereinafter referred to as a storage system for short) refers to a storage system that integrates a large number of storage devices (storage devices are also referred to as storage nodes) of various types in a network to work cooperatively through application software or application interfaces through functions such as cluster application, grid technology, and a distributed storage file system, so as to provide data storage and service access functions for the outside. In the embodiment of the application, the information such as network configuration and the like can be stored in the storage system, so that the server can conveniently call the information.
At present, the storage method of the storage system is as follows: when creating logical volumes, each logical volume is allocated a physical storage space, which may be a disk composition of a certain storage device or of several storage devices. The client stores data on a certain logical volume, that is, the data is stored on a file system, the file system divides the data into a plurality of parts, each part is an object, the object not only contains the data but also contains additional information such as a data Identification (ID) and the like, the file system writes each object into a physical storage space of the logical volume, and the file system records storage position information of each object, so that when the client requests to access the data, the file system can enable the client to access the data according to the storage position information of each object.
The process in which the storage system allocates physical storage space for a logical volume is specifically as follows: the physical storage space is divided in advance into stripes according to the estimated capacity of the objects to be stored on the logical volume (the estimate usually leaves a large margin relative to the capacity of the objects actually to be stored) and the redundant array of independent disks (RAID, Redundant Array of Independent Disks) scheme; a logical volume can be understood as a stripe, and the physical storage space is thus allocated to the logical volume.
The Database (Database), which can be considered as an electronic filing cabinet, is a place for storing electronic files, and users can perform operations such as adding, inquiring, updating, deleting and the like on the data in the files. A "database" is a collection of data stored together in a manner that can be shared with multiple users, with as little redundancy as possible, independent of the application.
A database management system (Database Management System, DBMS) is a computer software system designed to manage databases, and generally has basic functions such as storage, retrieval, security and backup. Database management systems can be classified according to the database model they support, e.g., relational or XML (Extensible Markup Language); according to the type of computer they support, e.g., server cluster or mobile phone; according to the query language they use, e.g., SQL (Structured Query Language) or XQuery; according to the focus of performance, e.g., maximum scale or maximum running speed; or in other ways. Regardless of the classification used, some DBMSs can span categories, for example by supporting multiple query languages simultaneously. In the embodiments of the application, the identification results can be stored in the database management system for convenient retrieval by the server.
It should be specifically noted that the service terminal in the embodiments of the present application may be a device that provides voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a mobile telephone (or "cellular" telephone) or a computer with a mobile terminal; it may also be a portable, pocket-sized, hand-held, computer-built-in or vehicle-mounted mobile device that exchanges voice and/or data with a radio access network, for example a personal communication service (PCS) telephone, a cordless telephone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, or a personal digital assistant (PDA).
In some embodiments, the embodiments of the present application may be applied to a user abnormal behavior determination system 1 as shown in fig. 1a, where the user abnormal behavior determination system 1 includes a user abnormal behavior determination server (hereinafter referred to as a server) 10, a plurality of image capturing devices 20 (data sources), at least one social platform server 30, and an operator server 40, where the server 10 may perform data interaction with the image capturing devices 20, the social platform server 30, and the operator server 40.
When the method for determining abnormal behavior of a user according to the embodiment of the present application is implemented based on the system for determining abnormal behavior of a user described in fig. 1a, reference may be made to an application scenario diagram as shown in fig. 1 b.
In this embodiment, when the server 10 needs to determine whether a target user has abnormal behavior within a target history period, it acquires the face images to be clustered within the target history period from a plurality of image acquisition devices 20 to form an image set, where the acquired face images to be clustered carry space-time information. It then clusters the images in the image set to obtain a target face file of the target user. Further, it acquires the target user's historical behavior data on at least one social platform within the target history period from the at least one social platform server 30, acquires the target user's operator communication data within the target history period from the operator server 40, and generates an actual track of the target user within the target history period according to the historical behavior data, the operator communication data and the target face file. Finally, it judges whether the target user has abnormal behavior according to the track to be confirmed and the actual track. In this embodiment, because the actual track of the target user is generated jointly from the historical behavior data, the operator communication data and the target face file, and the track to be confirmed is verified against the actual track, the cases in which the track to be confirmed cannot be confirmed or is confirmed incorrectly are reduced, and whether the target user has abnormal behavior can be confirmed more accurately.
In the following, the technical solutions of the present application will be described in detail with reference to several embodiments.
Referring to fig. 2, a method for determining abnormal behavior of a user provided in an embodiment of the present application is described below, where the embodiment of the present application includes:
201. the server obtains an image set in a target history period from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information.
The data source in this embodiment may be an image acquisition device, or a storage device storing the face images to be clustered that were captured by an image acquisition device. The following description takes an image acquisition device as an example of the data source. The image acquisition device in this embodiment may be an access-control camera, a road camera, a camera in a residential community, and the like; the specific installation positions and specific types of the cameras are not limited here.
The target history period in this embodiment is the period corresponding to the track to be confirmed, and may be one day, several days or several hours in the past.
It should be noted that, the face images to be clustered acquired in this embodiment all have spatial-temporal information, where the spatial-temporal information includes shooting time and shooting location when the image acquisition device shoots the face images to be clustered.
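Purely as an illustration of how such a record might be organized (the FaceRecord type and its field names are assumptions made here for clarity, not part of the embodiment), a minimal sketch is:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FaceRecord:
    """One face image to be clustered, together with its space-time information."""
    image_path: str        # where the captured face image is stored
    camera_id: str         # identifier of the image acquisition device
    shot_time: datetime    # shooting time recorded by the device
    shot_location: tuple   # shooting location, e.g. (latitude, longitude)

# Example record captured by an access-control camera (illustrative values)
record = FaceRecord(
    image_path="captures/cam_03/0001.jpg",
    camera_id="cam_03",
    shot_time=datetime(2022, 8, 1, 15, 5),
    shot_location=(39.9042, 116.4074),
)
```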
Specifically, the server in this embodiment may first send a face image acquisition instruction in a target history period to the image acquisition apparatus, and after receiving the face image acquisition instruction in the target history period, the image acquisition apparatus sends the face image in the target history period to the server.
202. And the server performs clustering processing on the face images to be clustered to obtain a target face file of the target user.
Specifically, the face image in the target face file is a face image matched with the face of the target user in the face images to be clustered, wherein the step of determining the target face file is shown in fig. 3:
as shown in fig. 3, in order to obtain a target face file of a target user, so as to facilitate subsequent generation of a real track of the user according to the target face file, in some embodiments, step 202 specifically includes:
2021. and determining the face characteristics of the face images to be clustered.
In some embodiments, the face images to be clustered are input into a face feature extraction model to obtain the face features of the face images to be clustered.
2022. And determining target similarity scores between every two face images to be clustered according to the face features.
In some embodiments, when calculating the target similarity score between every two face images to be clustered, the method specifically includes the following steps:
performing pairwise comparison processing on the face features to obtain initial similarity scores between face images to be clustered; and carrying out similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score.
The similarity score remapping network model may be a graph convolutional network (Graph Convolutional Network, GCN) model. By performing graph convolution over the similarity graph, the remapping can exploit the similarity information among a node's neighbours, so that the target similarity scores between the face images to be clustered are more accurate.
Therefore, the target similarity score processed by the similarity score remapping network model is more accurate, so that the clustering accuracy can be improved by using the target similarity score for clustering later.
In order to show the clustering process to the user more intuitively, in some embodiments, after the similarity scores between the face images to be clustered are determined, an association graph, namely a face similarity adjacency graph, is obtained. In this graph, each node corresponds to one face image to be clustered, and the obtained similarity scores form the edges between some of the face images to be clustered.
In some embodiments, referring to fig. 4, a face similarity adjacency graph is obtained through target similarity scores between face images to be clustered, then similarity scores between nodes in the face similarity adjacency graph are remapped through a GCN model, and the remapped face similarity adjacency graph is obtained.
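The remapping idea can be sketched as follows. This is a minimal illustration only: the embodiment does not specify the GCN architecture, its training, or how the adjacency graph is sparsified, so the layer shape, the untrained weight matrix and the top-k edge pruning below are all assumptions:

```python
import numpy as np

def initial_similarity(features: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between face features (N x D) -> initial scores (N x N)."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    return feats @ feats.T

def gcn_remap(features: np.ndarray, scores: np.ndarray,
              weight: np.ndarray, keep_top_k: int = 10) -> np.ndarray:
    """One illustrative graph-convolution step over the face similarity adjacency graph.

    Nodes are face images and edges carry the initial similarity scores; aggregating
    each node's neighbourhood lets the remapped score between two images also
    reflect how similar their neighbours are.
    """
    n = scores.shape[0]
    adj = scores.copy()
    np.fill_diagonal(adj, 0.0)
    if n > keep_top_k:                        # keep only the strongest edges per node
        thresh = np.sort(adj, axis=1)[:, -keep_top_k][:, None]
        adj = np.where(adj >= thresh, adj, 0.0)
    adj = np.maximum(adj, adj.T)              # keep the graph symmetric
    adj = adj + np.eye(n)                     # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt  # symmetric normalization
    hidden = np.maximum(norm_adj @ features @ weight, 0.0)   # GCN layer + ReLU
    hidden = hidden / (np.linalg.norm(hidden, axis=1, keepdims=True) + 1e-8)
    return hidden @ hidden.T                  # remapped target similarity scores

# Usage: features come from a face feature extraction model; in practice the weight
# matrix would be learned, here it is random purely for illustration.
features = np.random.rand(20, 128)
weight = np.random.rand(128, 64)
target_scores = gcn_remap(features, initial_similarity(features), weight)
```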
2023. And clustering the face images to be clustered according to the target similarity score to obtain at least one face file.
In order to avoid repeated clustering when the face files corresponding to the target period are used in subsequent steps, after the at least one face file is obtained in this embodiment, the at least one face file is stored in a preset face file library.
2024. And determining the target face file from at least one face file according to the target face characteristics of the target user.
That is, when clustering face images, this embodiment first extracts the face features of each face image to be clustered, then determines the target similarity score between every two face images to be clustered according to the face features, and clusters the face images to be clustered according to the target similarity scores to obtain at least one face file. It then obtains the target face features of the target user and selects, from the at least one face file, the face file whose features are closest to the target face features as the target face file.
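A minimal sketch of this flow is given below, assuming the target similarity scores have already been computed; the fixed threshold and the simple connected-components grouping are illustrative assumptions, since the embodiment does not prescribe a particular clustering algorithm:

```python
import numpy as np

def cluster_into_files(scores: np.ndarray, threshold: float = 0.6) -> list[list[int]]:
    """Group face images into face files: images whose target similarity score exceeds
    the threshold are linked, and each connected component becomes one face file."""
    n = scores.shape[0]
    visited, files = set(), []
    for start in range(n):
        if start in visited:
            continue
        stack, members = [start], []
        while stack:
            i = stack.pop()
            if i in visited:
                continue
            visited.add(i)
            members.append(i)
            stack.extend(j for j in range(n)
                         if j not in visited and scores[i, j] >= threshold)
        files.append(members)
    return files

def select_target_file(files, features: np.ndarray, target_feature: np.ndarray):
    """Pick the face file whose mean feature is closest to the target user's face feature."""
    target = target_feature / np.linalg.norm(target_feature)
    def closeness(members):
        mean = features[members].mean(axis=0)
        return float(mean @ target / np.linalg.norm(mean))
    return max(files, key=closeness)
```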
In some embodiments, because the number of face images to be clustered acquired from the plurality of data sources is very large, having the server aggregate all of the face images into files at once would be extremely complex. To reduce the complexity of the file aggregation, this embodiment performs the aggregation region by region: for example, it first aggregates files for each individual image acquisition device, then aggregates the resulting files within the same residential community, and repeats this process until the files of all regions have finally been aggregated. This significantly reduces the complexity of the file aggregation.
Therefore, this embodiment can divide the face images to be clustered into a plurality of computing blocks according to the space-time information in the face images, perform clustering computation on each computing block to obtain a plurality of first clustering results, divide the first clustering results into a plurality of computing blocks again, and perform the clustering operation on each of these blocks. The clustering is thus carried out level by level over the regions until all images have been clustered, and this hierarchical clustering method reduces the complexity of the clustering.
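The region-by-region aggregation can be sketched as follows; the pairwise merge criterion (cosine similarity of mean file features against a fixed threshold) is an assumption made here for illustration:

```python
import numpy as np

def merge_two_blocks(files_a, files_b, features, threshold=0.6):
    """Merge the face files aggregated in two neighbouring computing blocks.

    Two files are merged when the cosine similarity of their mean features exceeds
    the threshold; otherwise they are kept as separate files.
    """
    merged = [list(f) for f in files_a]
    for fb in files_b:
        mean_b = features[fb].mean(axis=0)
        mean_b = mean_b / np.linalg.norm(mean_b)
        best, best_sim = None, threshold
        for idx, fa in enumerate(merged):
            mean_a = features[fa].mean(axis=0)
            sim = float(mean_a @ mean_b / np.linalg.norm(mean_a))
            if sim >= best_sim:
                best, best_sim = idx, sim
        if best is None:
            merged.append(list(fb))
        else:
            merged[best] = merged[best] + list(fb)
    return merged

def hierarchical_aggregate(block_files, features):
    """Repeatedly merge neighbouring blocks until a single set of face files remains."""
    levels = list(block_files)
    while len(levels) > 1:
        nxt = []
        for i in range(0, len(levels), 2):
            if i + 1 < len(levels):
                nxt.append(merge_two_blocks(levels[i], levels[i + 1], features))
            else:
                nxt.append(levels[i])
        levels = nxt
    return levels[0]
```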
In some embodiments, considering that a snapped face record is generally accompanied by a corresponding snapped human body image, the face images can be screened according to the snapped human body images in order to reduce the number of face images that take part in the subsequent face clustering, thereby reducing the amount of clustering computation and increasing the clustering speed.
Specifically, in this case the image set further includes human body images corresponding to the face images to be clustered, and before the face images to be clustered are clustered to obtain the target face file of the target user, the method further includes: determining the human body features of the human body images; determining the human body images whose human body features conform to the target human body features as target human body images; screening the face images to be clustered according to the target human body images to obtain target face images; and taking the target face images as the face images to be clustered in the image set. The target human body features are the human body features corresponding to the target user, for example the target user's height, build, and the like.
That is, before the face image clustering, the human body features of each human body image are acquired, the human body features that do not match the target human body features are filtered out to obtain the target human body images, the target face images corresponding to those target human body images are then obtained, and the target face images are clustered, where the target human body features are generated according to the human body features of the target user.
Therefore, in this embodiment the target face images are used as the face images to be clustered in the image set, so that only the filtered target face images need to be clustered. Since the number of target face images taking part in the clustering is smaller than the number of face images initially acquired, the number of face images involved in the clustering is reduced and the clustering speed is improved.
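A minimal sketch of this pre-filtering, assuming each face image to be clustered has a paired human body feature vector; the cosine-similarity test and its threshold are illustrative assumptions:

```python
import numpy as np

def filter_by_body(face_images, body_features: np.ndarray,
                   target_body_feature: np.ndarray, threshold: float = 0.7):
    """Keep only the face images whose paired human body features match the target
    user's body features (e.g. height, build), reducing the number of face images
    that take part in the subsequent clustering."""
    target = target_body_feature / np.linalg.norm(target_body_feature)
    feats = body_features / np.linalg.norm(body_features, axis=1, keepdims=True)
    keep = feats @ target >= threshold
    return [img for img, ok in zip(face_images, keep) if ok]
```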
203. The server obtains historical behavior data of the target user on at least one social platform within the target history period, as well as operator communication data.
In this embodiment, the social platform includes social platforms that provide location-tagged publishing functions, such as Douyin, WeChat, Weibo and/or Xiaohongshu. The historical behavior data may be the location information, posting time and content of a Moments post published by the target user through the target user's real-name-authenticated WeChat account; and/or the location information, posting time and content of a blog post published by the target user through the target user's real-name-authenticated Weibo account; and so on.
The operator communication data is the base-station positioning data generated when communication equipment, such as a mobile phone registered under the target user's real name, communicates with base stations.
Specifically, when the server needs to generate the actual track of the target user, it sends an instruction to the social platform server to acquire the target user's historical behavior data within the target history period, and sends an instruction to the operator server to acquire the target user's operator communication data within the target history period.
After receiving the historical behavior data acquisition instruction, the social platform server retrieves the target user's historical behavior data within the target history period from its memory and feeds it back to the server; after receiving the operator communication data acquisition instruction, the operator server retrieves the target user's operator communication data within the target history period from its memory and feeds it back to the server.
It should be noted that there is no fixed execution order between step 203 and steps 201 to 202; that is, step 203 may be executed simultaneously with steps 201 to 202, or before them. The specific execution order is not limited here.
204. And the server generates an actual track of the target user in the target history period according to the history behavior data, the operator communication data and the target face file.
The actual track is a track computed from the acquired user data, for example a track generated in chronological order from the space-time information of the Moments posts published by the target user, the base-station positioning of the target user's real-name-authenticated mobile phone, the space-time information of each face image in the target face file, and the like.
In some embodiments, specifically, step S204 includes: extracting first time-space information from the historical behavior data; extracting second spatiotemporal information from the operator communication data; extracting third space-time information from the target face file; and then determining the actual track according to the first time-space information, the second time-space information, the third time-space information, the first weight corresponding to the first time-space information, the second weight corresponding to the second time-space information and the third weight corresponding to the third time-space information, wherein the third weight is greater than the second weight and the second weight is greater than the first weight.
That is, in this embodiment, when generating the actual track, the first space-time information, the second space-time information and the third space-time information are extracted from the historical behavior data, the operator communication data and the target face file respectively, and the actual track is then generated from them. To avoid the situation in which several track points correspond to the same time, a weight is preset for each kind of space-time information, and the track point with the higher weight is preferentially selected as the location of the actual track at that time.
For example, the first space-time information gives track point A at 15:05 on 1 August 2022, while the third space-time information gives track point B at the same time, so two different locations correspond to 15:05 on 1 August 2022. Because the weight of the third space-time information is higher than that of the first space-time information, when the actual track is generated the location given by the first space-time information is discarded, and location B is taken as the track point of the actual track at 15:05 on 1 August 2022.
Therefore, in the embodiment, different weights are set for each piece of space-time information, so that the situation that different track points appear on the actual track at the same time point is prevented, and the interpretability of the actual track is improved.
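The weighted fusion described above can be sketched as follows: track points from the three sources are pooled, and when several points fall into the same time slot only the point from the highest-weighted source is kept. The one-minute time slot and the concrete weight values are assumptions for illustration; the embodiment only requires that the third weight exceed the second and the second exceed the first:

```python
from datetime import datetime

# weights: face file > operator communication data > social platform behavior data
SOURCE_WEIGHT = {"behavior": 1, "operator": 2, "face_file": 3}

def build_actual_track(points):
    """points: list of (timestamp, location, source). Returns the actual track as a
    chronologically ordered list of (timestamp, location), resolving conflicts at the
    same time slot in favour of the highest-weighted source."""
    by_slot = {}
    for ts, loc, source in points:
        slot = ts.replace(second=0, microsecond=0)   # assumed one-minute time slot
        best = by_slot.get(slot)
        if best is None or SOURCE_WEIGHT[source] > SOURCE_WEIGHT[best[1]]:
            by_slot[slot] = (loc, source)
    return [(slot, loc) for slot, (loc, _) in sorted(by_slot.items())]

# Example from the description: at 15:05 on 2022-08-01 the behavior data gives point A
# and the face file gives point B; the face file has the higher weight, so B is kept.
track = build_actual_track([
    (datetime(2022, 8, 1, 15, 5), "A", "behavior"),
    (datetime(2022, 8, 1, 15, 5), "B", "face_file"),
])
```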
In some embodiments, if the user still corresponds to a plurality of track points at the same time, this indicates that the user may have deliberately forged data; warning information is then generated to remind the checking personnel of the time period in which the user has multiple track points, so that the target user's track can be checked further.
205. And the server judges whether the target user has abnormal behaviors according to the track to be confirmed and the actual track.
The track to be confirmed is a track that needs further verification; it may be a track provided by the target user to the verification personnel, for example a track generated from the positions of the target user at a number of time points as stated by the target user.
In this embodiment, specifically, the matching degree between the track to be confirmed and the actual track is determined. If the matching degree is greater than or equal to a preset matching degree threshold, it is determined that the target user has no abnormal behavior; if the matching degree is smaller than the preset matching degree threshold, it is determined that the target user has abnormal behavior. Abnormal behavior indicates that the target user has provided a false track, in which case the checking personnel can subsequently examine the user more closely.
Specifically, the track to be confirmed is composed of a plurality of space-time points to be confirmed, and the actual track is composed of a plurality of actual space-time points (in general, the actual track has more space-time points than the track to be confirmed). The space-time points to be confirmed are compared with the actual space-time points at the same time; if the comparison succeeds (that is, the two locations are the same), the space-time point to be confirmed passes the match. This continues until all space-time points to be confirmed have been matched, the matching information of all the points is obtained, and the matching information of all the points is combined to obtain the matching degree between the track to be confirmed and the actual track.
For example, if the track to be confirmed has 10 space-time points and 9 of them pass the match, the matching degree of the track to be confirmed is 90%; if the matching degree threshold is 85%, the match passes. If the track to be confirmed has 10 space-time points and only 6 of them pass the match, the matching degree is 60% and the match fails.
FIG. 5 is a schematic diagram of the track to be confirmed completely matching the actual track, FIG. 6 is a schematic diagram of the track to be confirmed passing the match against the actual track (9 of the 10 points to be confirmed match), and FIG. 7 is a schematic diagram of the track to be confirmed failing the match against the actual track (only 5 of the 10 points to be confirmed match).
In this embodiment, if the matching degree is greater than or equal to the preset matching degree threshold, the track to be confirmed agrees well with the actual track, that is, the track points at the same time are the same in both tracks, the track to be confirmed provided by the target user is correct, and there is no abnormal behavior. If the track points at the same time differ, for example the track point at a certain time in the track to be confirmed is A while the track point at that time in the actual track is B, then the track to be confirmed and the actual track do not match at that time.
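A minimal sketch of this matching-degree check, following the example above (10 points to be confirmed, 9 matching, gives 90%, which passes an 85% threshold); the exact-equality comparison of locations at the same time point is an assumption made for illustration:

```python
def matching_degree(track_to_confirm, actual_track):
    """Both tracks are dicts mapping a time point to a location. The matching degree is
    the fraction of points in the track to be confirmed whose location agrees with the
    actual track at the same time point."""
    if not track_to_confirm:
        return 0.0
    matched = sum(1 for t, loc in track_to_confirm.items()
                  if actual_track.get(t) == loc)
    return matched / len(track_to_confirm)

def has_abnormal_behavior(track_to_confirm, actual_track, threshold=0.85):
    """Abnormal behavior is reported when the matching degree falls below the threshold."""
    return matching_degree(track_to_confirm, actual_track) < threshold
```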
In summary, in this scheme the actual track of the target user is generated jointly from the historical behavior data, the operator communication data and the target face file, and the track to be confirmed is verified against the actual track, so the cases in which the track to be confirmed cannot be confirmed or is confirmed incorrectly are reduced, and whether the target user has abnormal behavior can be confirmed more accurately.
In addition, because the actual track in this scheme is generated by combining the historical behavior data, the operator communication data and the target face file, the track points are richer and the track density of the actual track is improved.
Fig. 8 is a schematic block diagram of a user abnormal behavior determination apparatus provided in an embodiment of the present application. As shown in fig. 8, corresponding to the above method for determining abnormal behavior of a user, the present application further provides a device for determining abnormal behavior of a user. The user abnormal behavior determination apparatus includes a unit for performing the above-described user abnormal behavior determination method, and the apparatus may be configured in a terminal or a server. Specifically, referring to fig. 8, the abnormal user behavior determining apparatus 800 includes a transceiver module 801 and a processing module 802.
A transceiver module 801, configured to obtain an image set in a target history period from a plurality of data sources, where the image set includes a plurality of face images to be clustered having spatiotemporal information;
a processing module 802, configured to cluster the face images to be clustered to obtain a target face file of a target user; acquire historical behavior data of the target user on at least one social platform within the target history period, as well as operator communication data; generate an actual track of the target user within the target history period according to the historical behavior data, the operator communication data and the target face file; and judge whether the target user has abnormal behaviors according to the track to be confirmed and the actual track.
In some embodiments, the processing module 802 is specifically configured to, when executing the step of generating the actual track of the target user in the target history period according to the historical behavior data, the operator communication data, and the target face profile:
extracting first time-space information from the historical behavior data;
extracting second spatiotemporal information from the operator communication data;
extracting third space-time information from the target face file;
and determining the actual track according to the first time-space information, the second time-space information, the third time-space information, the first weight corresponding to the first time-space information, the second weight corresponding to the second time-space information and the third weight corresponding to the third time-space information, wherein the third weight is greater than the second weight and the second weight is greater than the first weight.
In some embodiments, the processing module 802 is specifically configured to, when executing the step of determining whether the target user has abnormal behavior according to the track to be confirmed and the actual track:
determining the matching degree of the track to be confirmed and the actual track;
if the matching degree is greater than or equal to a preset matching degree threshold value, determining that the target user does not have abnormal behaviors;
If the matching degree is smaller than a preset matching degree threshold value, determining that the target user has abnormal behaviors.
In some embodiments, the processing module 802 is specifically configured to, when performing the step of clustering the target face image to obtain the target face profile:
determining the face characteristics of the face images to be clustered;
determining target similarity scores between every two face images to be clustered according to the face features;
clustering the face images to be clustered according to the target similarity score to obtain at least one face file;
and determining the target face file from at least one face file according to the target face characteristics of the target user.
In some embodiments, the processing module 802 is specifically configured to, when executing the step of determining the target similarity score between the face images to be clustered according to the face features:
performing pairwise comparison processing on the face features to obtain initial similarity scores between face images to be clustered;
and carrying out similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score.
In some embodiments, after performing the step of clustering the face images to be clustered according to the target similarity score, the processing module 802 is further configured to:
and storing at least one face file into a preset face file library.
In some embodiments, the image set further includes a human body image corresponding to the face image to be clustered, and before executing the step of clustering the face image to be clustered to obtain the target face profile of the target user, the processing module 802 is further configured to:
determining a human feature of the human image;
determining the human body image with the human body characteristics conforming to the target human body characteristics as a target human body image;
screening the face images to be clustered according to the target human body image to obtain a target human face image;
and taking the target face image as a face image to be clustered in the image set.
In summary, because the user abnormal behavior determination apparatus 800 in this scheme generates the actual track of the target user jointly from the historical behavior data, the operator communication data and the target face file, and verifies the track to be confirmed against the actual track, the cases in which the track to be confirmed cannot be confirmed or is confirmed incorrectly are reduced, and whether the target user has abnormal behavior can be confirmed more accurately.
It should be noted that, as those skilled in the art can clearly understand, the specific implementation process of the above-mentioned user abnormal behavior determining apparatus and each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, the description is omitted here.
The apparatus shown in fig. 8 may have the structure shown in fig. 9. When the apparatus shown in fig. 8 has the structure shown in fig. 9, the processor in fig. 9 can implement the same or similar functions as the processing module provided by the apparatus embodiment corresponding to the apparatus, the transceiver in fig. 9 can implement the same or similar functions as the transceiver module provided by that apparatus embodiment, and the memory in fig. 9 stores the computer program that the processor invokes when performing the user abnormal behavior determination method. In the embodiment shown in fig. 8 of the present application, the entity device corresponding to the transceiver module may be an input/output interface, and the entity device corresponding to the processing module may be a processor.
The embodiment of the present application further provides another terminal device. As shown in fig. 10, for convenience of explanation, only the portion related to the embodiment of the present application is shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (Personal Digital Assistant, PDA for short), a point-of-sale terminal (Point of Sale, POS for short), a vehicle-mounted computer, and the like. The mobile phone is taken as an example of the terminal:
Fig. 10 is a block diagram showing part of the structure of a mobile phone related to the terminal device provided in an embodiment of the present application. Referring to fig. 10, the mobile phone includes: a radio frequency (Radio Frequency, RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a sensor 650, an audio circuit 660, a wireless fidelity (Wi-Fi) module 670, a processor 680, and a power supply 690. It will be appreciated by those skilled in the art that the mobile phone structure shown in fig. 10 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine certain components, or use a different arrangement of components.
The following describes the components of the mobile phone in detail with reference to fig. 10:
the RF circuit 610 may be configured to receive and transmit signals during the sending and receiving of messages or during a call. In particular, it receives downlink information from a base station and passes it to the processor 680 for processing; in addition, it sends uplink data to the base station. Generally, the RF circuit 610 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the RF circuit 610 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 620 may be used to store software programs and modules, and the processor 680 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 620. The memory 620 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. In addition, the memory 620 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 630 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. In particular, the input unit 630 may include a touch panel 631 and other input devices 632. The touch panel 631, also referred to as a touch screen, may collect touch operations by a user on or near it (for example, operations performed by the user on or near the touch panel 631 using any suitable object or accessory such as a finger or a stylus) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates and sends them to the processor 680, and can also receive commands from the processor 680 and execute them. In addition, the touch panel 631 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 631, the input unit 630 may include other input devices 632. In particular, the other input devices 632 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, a joystick, and the like.
The display unit 640 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 640 may include a display panel 641; optionally, the display panel 641 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. Further, the touch panel 631 may cover the display panel 641; when the touch panel 631 detects a touch operation on or near it, it transmits the operation to the processor 680 to determine the type of the touch event, and the processor 680 then provides a corresponding visual output on the display panel 641 according to the type of the touch event. Although in fig. 10 the touch panel 631 and the display panel 641 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 631 and the display panel 641 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 650, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 641 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 641 and/or the backlight when the mobile phone is moved close to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and in vibration-recognition-related functions (such as a pedometer and tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor that may also be configured on the mobile phone are not described in detail here.
The audio circuit 660, speaker 661 and microphone 662 may provide an audio interface between the user and the mobile phone. The audio circuit 660 may transmit the electrical signal converted from the received audio data to the speaker 661, and the speaker 661 converts it into a sound signal for output; on the other hand, the microphone 662 converts the collected sound signal into an electrical signal, which is received by the audio circuit 660 and converted into audio data; the audio data is then output to the processor 680 for processing and sent, for example, to another mobile phone via the RF circuit 610, or output to the memory 620 for further processing.
Wi-Fi is a short-distance wireless transmission technology. Through the Wi-Fi module 670, the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 10 shows the Wi-Fi module 670, it is understood that it is not an essential part of the mobile phone and may be omitted as needed without changing the essence of the application.
The processor 680 is the control center of the mobile phone. It connects the various parts of the entire mobile phone using various interfaces and lines, and performs the various functions and data processing of the mobile phone by running or executing the software programs and/or modules stored in the memory 620 and invoking the data stored in the memory 620, thereby monitoring the mobile phone as a whole. Optionally, the processor 680 may include one or more processing units; preferably, the processor 680 may integrate an application processor, which mainly handles the operating system, user interfaces and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 680.
The mobile phone further includes a power supply 690 (e.g., a battery) for powering the various components; the power supply may be logically connected to the processor 680 through a power management system, so that functions such as charging, discharging, and power consumption management are performed through the power management system.
Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, which are not described herein.
In the embodiment of the present application, the processor 680 included in the mobile phone also has the function of controlling the execution of the above flow of the user abnormal behavior determination method shown in fig. 2.
Fig. 11 is a schematic diagram of a server structure provided in an embodiment of the present application. The server 720 may vary considerably in configuration or performance, and may include one or more central processing units (Central Processing Unit, CPU) 722 (e.g., one or more processors), a memory 732, and one or more storage media 730 (e.g., one or more mass storage devices) storing application programs 742 or data 744. The memory 732 and the storage medium 730 may be transitory or persistent storage. The program stored in the storage medium 730 may include one or more modules (not shown), and each module may include a series of instruction operations on the server. Still further, the central processing unit 722 may be configured to communicate with the storage medium 730 and execute, on the server 720, the series of instruction operations stored in the storage medium 730.
The server 720 may also include one or more power supplies 726, one or more wired or wireless network interfaces 750, one or more input/output interfaces 758, and/or one or more operating systems 741, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiments, for example the steps of the server shown in fig. 2, may be based on the structure of the server 720 shown in fig. 11. For example, the processor 722 may perform the following operations by invoking the instructions in the memory 732:
acquiring an image set in a target history period from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information;
clustering the face images to be clustered to obtain a target face file of a target user;
acquiring historical behavior data of the target user on at least one social platform within the target history period, and operator communication data;
generating an actual track of the target user in the target historical period according to the historical behavior data, the operator communication data and the target face file;
and judging whether the target user has abnormal behaviors according to the track to be confirmed and the actual track.
In some embodiments, when the step of generating the actual track of the target user in the target history period according to the historical behavior data, the operator communication data and the target face profile is implemented by the processor 722, the following steps are specifically implemented:
extracting first space-time information from the historical behavior data;
extracting second space-time information from the operator communication data;
extracting third space-time information from the target face file;
and determining the actual track according to the first space-time information, the second space-time information, the third space-time information, a first weight corresponding to the first space-time information, a second weight corresponding to the second space-time information and a third weight corresponding to the third space-time information, wherein the third weight is greater than the second weight and the second weight is greater than the first weight (a weighted-fusion sketch follows).
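Beyond the ordering of the three weights, the fusion rule is not spelled out, so the Python sketch below simply buckets the space-time points of the three sources by time and takes a weight-averaged position per bucket. The concrete weights 0.2/0.3/0.5, the ten-minute bucket and the averaging rule are assumptions introduced for this sketch; the only constraint carried over from the text is that the face-file weight exceeds the operator weight, which exceeds the social-behavior weight.

```python
def build_actual_track(behavior_points, operator_points, face_points,
                       w1: float = 0.2, w2: float = 0.3, w3: float = 0.5,
                       bucket_seconds: float = 600.0):
    """Fuse space-time points from the three sources into one actual track.

    Each *_points argument is a list of (timestamp, latitude, longitude).
    w1/w2/w3 weight the social-platform, operator and face-file sources;
    the method only requires w3 > w2 > w1."""
    assert w3 > w2 > w1, "face-file weight must dominate, then operator, then behavior"

    buckets = {}
    for weight, points in ((w1, behavior_points), (w2, operator_points), (w3, face_points)):
        for ts, lat, lon in points:
            buckets.setdefault(int(ts // bucket_seconds), []).append((weight, ts, lat, lon))

    track = []
    for key in sorted(buckets):
        entries = buckets[key]
        total = sum(w for w, _, _, _ in entries)
        # Weighted average of the points that fall into this time bucket.
        avg_ts = sum(w * ts for w, ts, _, _ in entries) / total
        avg_lat = sum(w * lat for w, _, lat, _ in entries) / total
        avg_lon = sum(w * lon for w, _, _, lon in entries) / total
        track.append((avg_ts, avg_lat, avg_lon))
    return track
```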
In some embodiments, when the step of determining whether the target user has abnormal behavior according to the track to be confirmed and the actual track is implemented by the processor 722, the following steps are specifically implemented:
determining the matching degree of the track to be confirmed and the actual track;
if the matching degree is greater than or equal to a preset matching degree threshold value, determining that the target user does not have abnormal behaviors;
if the matching degree is smaller than a preset matching degree threshold value, determining that the target user has abnormal behaviors.
In some embodiments, when the step of clustering the face images to be clustered to obtain the target face file is implemented by the processor 722, the following steps are specifically implemented:
determining the face characteristics of the face images to be clustered;
determining target similarity scores between every two face images to be clustered according to the face features;
clustering the face images to be clustered according to the target similarity score to obtain at least one face file;
and determining the target face file from at least one face file according to the target face characteristics of the target user.
In some embodiments, when the step of determining the target similarity score between the face images to be clustered according to the face features is implemented by the processor 722, the following steps are specifically implemented:
performing pairwise comparison processing on the face features to obtain initial similarity scores between face images to be clustered;
and carrying out similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score.
In some embodiments, after implementing the step of clustering the face images to be clustered according to the target similarity score to obtain at least one face file, the processor 722 further implements the following steps:
and storing at least one face file into a preset face file library.
In some embodiments, the image set further includes a human body image corresponding to the face image to be clustered, and before implementing the step of clustering the face images to be clustered to obtain the target face file of the target user, the processor 722 further implements the following steps:
determining a human feature of the human image;
determining the human body image with the human body characteristics conforming to the target human body characteristics as a target human body image;
screening the face images to be clustered according to the target human body image to obtain a target human face image;
and taking the target face image as a face image to be clustered in the image set.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
It should be appreciated that, in the embodiments of the present application, the processor 722 may be a central processing unit (Central Processing Unit, CPU), and the processor 722 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
Those skilled in the art will appreciate that all or part of the flow for implementing the methods of the above embodiments may be accomplished by a computer program instructing the relevant hardware. The computer program comprises program instructions and may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the above method.
Accordingly, the present application also provides a storage medium. The storage medium may be a computer readable storage medium. The storage medium stores a computer program, wherein the computer program includes program instructions. The program instructions, when executed by the processor, cause the processor to perform the steps of:
acquiring an image set in a target history period from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information;
clustering the face images to be clustered to obtain a target face file of a target user;
acquiring historical behavior data of the target user on at least one social platform within the target history period, and operator communication data;
generating an actual track of the target user in the target historical period according to the historical behavior data, the operator communication data and the target face file;
and judging whether the target user has abnormal behaviors according to the track to be confirmed and the actual track.
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs. In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit may be stored in a storage medium if implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a terminal, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
While the application has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for determining abnormal behavior of a user, comprising:
acquiring an image set in a target history period from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information;
clustering the face images to be clustered to obtain a target face file of a target user;
acquiring historical behavior data of the target user on at least one social platform within the target history period, and operator communication data;
generating an actual track of the target user in the target historical period according to the historical behavior data, the operator communication data and the target face file;
judging whether the target user has abnormal behaviors according to the track to be confirmed and the actual track;
the generating an actual track of the target user in the target history period according to the history behavior data, the operator communication data and the target face file includes:
extracting first space-time information from the historical behavior data;
extracting second space-time information from the operator communication data;
extracting third space-time information from the target face file;
and determining the actual track according to the first space-time information, the second space-time information, the third space-time information, a first weight corresponding to the first space-time information, a second weight corresponding to the second space-time information and a third weight corresponding to the third space-time information, wherein the third weight is greater than the second weight and the second weight is greater than the first weight.
2. The method according to claim 1, wherein the determining whether the target user has abnormal behavior according to the track to be confirmed and the actual track comprises:
determining the matching degree of the track to be confirmed and the actual track;
if the matching degree is greater than or equal to a preset matching degree threshold value, determining that the target user does not have abnormal behaviors;
if the matching degree is smaller than a preset matching degree threshold value, determining that the target user has abnormal behaviors.
3. The method of claim 1, wherein the clustering the face images to be clustered to obtain the target face file includes:
determining the face characteristics of the face images to be clustered;
determining target similarity scores between every two face images to be clustered according to the face features;
clustering the face images to be clustered according to the target similarity score to obtain at least one face file;
and determining the target face file from at least one face file according to the target face characteristics of the target user.
4. A method according to claim 3, wherein determining the target similarity score between the face images to be clustered according to the face features comprises:
performing pairwise comparison processing on the face features to obtain initial similarity scores between face images to be clustered;
obtaining a face similarity adjacency graph through the initial similarity scores between every two face images to be clustered; each node in the face similarity adjacency graph corresponds to a face image to be clustered;
performing similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score;
the preset similarity score remapping network model is a graph convolutional network model; the step of performing similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score comprises the following steps:
performing similarity score remapping processing on the initial similarity scores among the nodes in the face similarity adjacency graph through the graph convolutional network model to obtain a remapped face similarity adjacency graph;
and obtaining the target similarity score according to the remapped face similarity adjacency graph.
5. A method according to claim 3, wherein after the clustering of the face images to be clustered according to the target similarity score to obtain at least one face file, the method further comprises:
and storing at least one face file into a preset face file library.
6. The method according to any one of claims 1 to 5, wherein the image set further includes a human body image corresponding to the face image to be clustered, and before the face images to be clustered are clustered to obtain the target face file of the target user, the method further includes:
determining a human feature of the human image;
determining the human body image with the human body characteristics conforming to the target human body characteristics as a target human body image;
screening the face images to be clustered according to the target human body image to obtain a target human face image;
and taking the target face image as a face image to be clustered in the image set.
7. A user abnormal behavior determination apparatus, characterized by comprising:
the receiving and transmitting module is used for acquiring an image set in a target history period from a plurality of data sources, wherein the image set comprises a plurality of face images to be clustered with space-time information;
the processing module is used for carrying out clustering processing on the face images to be clustered to obtain a target face file of a target user; acquiring historical behavior data of the target user on at least one social platform within the target history period, and operator communication data; generating an actual track of the target user in the target history period according to the historical behavior data, the operator communication data and the target face file; judging whether the target user has abnormal behaviors according to the track to be confirmed and the actual track;
the processing module is specifically configured to, when executing the step of generating an actual track of the target user in the target history period according to the historical behavior data, the operator communication data, and the target face file:
extracting first space-time information from the historical behavior data;
extracting second space-time information from the operator communication data;
extracting third space-time information from the target face file;
and determining the actual track according to the first space-time information, the second space-time information, the third space-time information, a first weight corresponding to the first space-time information, a second weight corresponding to the second space-time information and a third weight corresponding to the third space-time information, wherein the third weight is greater than the second weight and the second weight is greater than the first weight.
8. The device for determining abnormal behavior of a user according to claim 7, wherein the processing module is configured to, when executing the step of determining whether the target user has abnormal behavior according to the track to be confirmed and the actual track, specifically:
determining the matching degree of the track to be confirmed and the actual track;
if the matching degree is greater than or equal to a preset matching degree threshold value, determining that the target user does not have abnormal behaviors;
if the matching degree is smaller than a preset matching degree threshold value, determining that the target user has abnormal behaviors.
9. The apparatus for determining abnormal behavior of a user according to claim 7, wherein the processing module is configured to, when performing the step of clustering the face images to be clustered to obtain the target face file:
determining the face characteristics of the face images to be clustered;
determining target similarity scores between every two face images to be clustered according to the face features;
clustering the face images to be clustered according to the target similarity score to obtain at least one face file;
and determining the target face file from at least one face file according to the target face characteristics of the target user.
10. The device for determining abnormal behavior of a user according to claim 9, wherein the processing module is configured to, when executing the step of determining the target similarity score between the face images to be clustered according to the face features:
performing pairwise comparison processing on the face features to obtain initial similarity scores between face images to be clustered;
obtaining a face similarity adjacency graph through the initial similarity scores between every two face images to be clustered; each node in the face similarity adjacency graph corresponds to a face image to be clustered;
performing similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score;
the preset similarity score remapping network model is a graph convolutional network model; the step of performing similarity score remapping processing on the initial similarity score according to a preset similarity score remapping network model to obtain the target similarity score comprises the following steps:
performing similarity score remapping processing on the initial similarity scores among the nodes in the face similarity adjacency graph through the graph convolutional network model to obtain a remapped face similarity adjacency graph;
and obtaining the target similarity score according to the remapped face similarity adjacency graph.
11. The apparatus for determining abnormal user behavior according to claim 9, wherein the processing module is further configured to, after performing the step of clustering the face images to be clustered according to the target similarity score to obtain at least one face profile:
and storing at least one face file into a preset face file library.
12. The apparatus for determining abnormal user behavior according to any one of claims 7 to 11, wherein the image set further includes a human body image corresponding to the face image to be clustered, and the processing module is further configured to, before executing the step of clustering the face images to be clustered to obtain the target face file of the target user:
determining a human feature of the human image;
determining the human body image with the human body characteristics conforming to the target human body characteristics as a target human body image;
screening the face images to be clustered according to the target human body image to obtain a target human face image;
and taking the target face image as a face image to be clustered in the image set.
13. A computer device, characterized in that it comprises a memory on which a computer program is stored and a processor which, when executing the computer program, implements the method according to any of claims 1-6.
14. A computer readable storage medium, characterized in that the storage medium stores a computer program comprising program instructions which, when executed by a processor, can implement the method of any of claims 1-6.
CN202211065727.9A 2022-09-01 2022-09-01 User abnormal behavior determination method, device, computer equipment and storage medium Active CN115376192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211065727.9A CN115376192B (en) 2022-09-01 2022-09-01 User abnormal behavior determination method, device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211065727.9A CN115376192B (en) 2022-09-01 2022-09-01 User abnormal behavior determination method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115376192A CN115376192A (en) 2022-11-22
CN115376192B true CN115376192B (en) 2024-01-30

Family

ID=84069666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211065727.9A Active CN115376192B (en) 2022-09-01 2022-09-01 User abnormal behavior determination method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115376192B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104426937A (en) * 2013-08-26 2015-03-18 何愈 Method for locating content recorded by other equipment through mobile phone and cloud computation
CN114357216A (en) * 2021-12-10 2022-04-15 浙江大华技术股份有限公司 Portrait gathering method and device, electronic equipment and storage medium
CN114491148A (en) * 2022-04-14 2022-05-13 武汉中科通达高新技术股份有限公司 Target person searching method and device, computer equipment and storage medium
CN114898420A (en) * 2022-03-29 2022-08-12 浙江大华技术股份有限公司 Abnormal face archive identification method and device, electronic device and storage medium
CN114973352A (en) * 2022-03-31 2022-08-30 北京瑞莱智慧科技有限公司 Face recognition method, device, equipment and storage medium
CN114973351A (en) * 2022-03-31 2022-08-30 北京瑞莱智慧科技有限公司 Face recognition method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104662527A (en) * 2012-08-10 2015-05-27 诺基亚公司 Method and apparatus for providing crowd-sourced geocoding

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104426937A (en) * 2013-08-26 2015-03-18 何愈 Method for locating content recorded by other equipment through mobile phone and cloud computation
CN114357216A (en) * 2021-12-10 2022-04-15 浙江大华技术股份有限公司 Portrait gathering method and device, electronic equipment and storage medium
CN114898420A (en) * 2022-03-29 2022-08-12 浙江大华技术股份有限公司 Abnormal face archive identification method and device, electronic device and storage medium
CN114973352A (en) * 2022-03-31 2022-08-30 北京瑞莱智慧科技有限公司 Face recognition method, device, equipment and storage medium
CN114973351A (en) * 2022-03-31 2022-08-30 北京瑞莱智慧科技有限公司 Face recognition method, device, equipment and storage medium
CN114491148A (en) * 2022-04-14 2022-05-13 武汉中科通达高新技术股份有限公司 Target person searching method and device, computer equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Research on the innovation of trajectory big data in social governance; Bin Zhang et al.; Proceedings of the 6th ACM/ACIS International Conference on Applied Computing and Information Technology; pp. 38-42 *
Application of location trajectory data mining in public security work; Tong Zhifeng; China Master's Theses Full-text Database, Social Sciences I; Sections 1.2 and 4.1 *
Privacy protection of check-in trajectories in location-based mobile social networks; Liu Yueyao; Wanfang Dissertations; pp. 1-58 *
Social network analysis based on mobile phone LBS location services; Ma Qiang; Fu Yanru; Journal of Ningbo Polytechnic (No. 4); pp. 98-102 *
Population mobility analysis based on mobile phone trajectory data; Kong Yangxin; Jin Cheqing; Wang Xiaoling; Journal of Computer Applications (No. 1); pp. 50-57 *

Also Published As

Publication number Publication date
CN115376192A (en) 2022-11-22

Similar Documents

Publication Publication Date Title
US11290447B2 (en) Face verification method and device
CN110704661B (en) Image classification method and device
CN114973351B (en) Face recognition method, device, equipment and storage medium
CN114595124B (en) Time sequence abnormity detection model evaluation method, related device and storage medium
CN114694226B (en) Face recognition method, system and storage medium
CN115022098A (en) Artificial intelligence safety target range content recommendation method, device and storage medium
CN112995757B (en) Video clipping method and device
CN116778306A (en) Fake object detection method, related device and storage medium
CN114821751B (en) Image recognition method, device, system and storage medium
CN115640567B (en) TEE integrity authentication method, device, system and storage medium
CN115376192B (en) User abnormal behavior determination method, device, computer equipment and storage medium
CN116071614A (en) Sample data processing method, related device and storage medium
CN115546516A (en) Personnel gathering method and device, computer equipment and storage medium
CN115062197A (en) Attendance data detection method and device and storage medium
CN114973352A (en) Face recognition method, device, equipment and storage medium
CN115412726B (en) Video authenticity detection method, device and storage medium
CN115565215B (en) Face recognition algorithm switching method and device and storage medium
CN116386647B (en) Audio verification method, related device, storage medium and program product
CN115525554B (en) Automatic test method, system and storage medium for model
CN116363490A (en) Fake object detection method, related device and storage medium
CN115050079B (en) Face recognition method, device and storage medium
CN116257657B (en) Data processing method, data query method, related device and storage medium
CN117252983A (en) Object reconstruction method, device, computer equipment and storage medium
CN116167274A (en) Simulation combat attack and defense training method, related device and storage medium
CN116244071A (en) Resource adjustment method, related equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant