CN111191601B - Method, device, server and storage medium for identifying peer users - Google Patents

Method, device, server and storage medium for identifying peer users

Info

Publication number
CN111191601B
CN111191601B (application CN201911405607.7A)
Authority
CN
China
Prior art keywords
peer
user
target
users
preselected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911405607.7A
Other languages
Chinese (zh)
Other versions
CN111191601A (en)
Inventor
李景皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN201911405607.7A priority Critical patent/CN111191601B/en
Publication of CN111191601A publication Critical patent/CN111191601A/en
Application granted granted Critical
Publication of CN111191601B publication Critical patent/CN111191601B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Transfer Between Computers (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application relates to the technical field of computers and provides a peer user identification method comprising the following steps: acquiring first target images shot by each camera within a first preset time period, determining a first peer count of a first target user and a second target user from those images, and determining target peer users of the first target user according to the first peer count. The method analyzes peer relationships through the peer count, and no massive sample data needs to be collected when the analysis is performed in this way, which improves the efficiency of peer relationship analysis.

Description

Method, device, server and storage medium for identifying peer users
Technical Field
The application belongs to the technical field of computers, and particularly relates to a peer user identification method, a peer user identification device, a server and a storage medium.
Background
With the continuous development of computer technology, it has been observed that users who travel together (peer users) often share similar characteristics; if peer users can be identified accurately, this provides a good data basis for user classification. At present, methods for analyzing peer users need to determine the users' co-occurrence times from massive sample data and then identify peer users from those times, so analysis efficiency is low.
Disclosure of Invention
In view of this, the embodiments of the present application provide a method, an apparatus, a server, and a storage medium for identifying peer users, so as to solve the problem in the prior art that identification of peer users relies on massive sample data and is therefore inefficient.
A first aspect of an embodiment of the present application provides a peer user identification method, including:
acquiring a first target image shot by each camera in a first preset time period;
determining a first peer count of a first target user and a second target user according to the first target images, wherein the first target user is a user whose total number of appearances in all the first target images is greater than a preset count threshold, the second target user is a user whose total number of joint appearances with the first target user in all the first target images is greater than the preset count threshold, and the first peer count is the number of cameras through which the first target user and the second target user pass together;
and determining target peer users of the first target user according to the first peer count.
In an optional implementation manner, the determining, according to the first peer count, the target peer user of the first target user includes:
determining a second target user whose first peer count is greater than a preset count threshold as a preselected peer user of the first target user;
constructing a first peer count matrix based on the first peer counts of the first target user and each preselected peer user;
mapping the first peer count matrix into a peer weight matrix of each preselected peer user and the first target user based on a preset mapping rule;
and determining the target peer user of the first target user from the preselected peer users based on the peer weight matrix.
In an optional implementation manner, the mapping the first peer count matrix to the peer weight matrix of each preselected peer user and the first target user based on a preset mapping rule includes:
and mapping, according to a preset mapping relation between first peer counts and peer weight values, the first peer counts of the first target user and each preselected peer user in the first peer count matrix into corresponding peer weight values to obtain the peer weight matrix.
In an alternative implementation, before the determining, based on the peer weight matrix, the target peer user of the first target user from the preselected peer users, the method further includes:
dividing the first preset time period into a preset number of second preset time periods;
determining a second target image shot by each camera in the second preset time period from the first target image;
determining second peer counts of the first target user and each preselected peer user within the second preset time period according to the second target images;
and respectively constructing, for each second preset time period, a second peer count matrix according to the second peer counts of the first target user and each preselected peer user within that period.
In an optional implementation manner, the determining, based on the peer weight matrix, the target peer user of the first target user from the preselected peer users includes:
analyzing the second peer count matrices based on a preset association graph analysis method to obtain peer probability values of the preselected peer users and the first target user;
and determining the target peer users of the first target user from the preselected peer users according to the peer probability values and the peer weight matrix.
In an optional implementation manner, the analyzing the second peer count matrices based on the preset association graph analysis method to obtain peer probability values of the preselected peer users and the first target user includes:
constructing a time-sequence association graph of the second peer count matrices;
determining a variation pattern of the second peer counts according to the time-sequence association graph;
and determining the peer probability value of each preselected peer user and the first target user according to the variation pattern of the second peer counts.
In an optional implementation manner, the determining, according to the peer probability value and the peer weight matrix, the target peer user of the first target user from the preselected peer users includes:
respectively calculating the weighted probability value of each preselected peer user according to the peer probability value and the peer weight matrix;
and identifying the peer user of the first target user from the preselected peer users based on the weighted probability values.
A second aspect of the embodiments of the present application provides a peer user identification device, including:
the acquisition module is used for acquiring first target images shot by each camera in a first preset time period;
The first determining module is used for determining a first peer count of a first target user and a second target user according to the first target images, wherein the first target user is a user whose total number of appearances in all the first target images is greater than a preset count threshold, the second target user is a user whose total number of joint appearances with the first target user in all the first target images is greater than the preset count threshold, and the first peer count is the number of cameras which the first target user and the second target user pass through together;
and the second determining module is used for determining the target peer users of the first target user according to the first peer count.
In an alternative implementation, the second determining module includes:
a first determining unit, configured to determine, as a preselected peer user of the first target user, a second target user whose first peer count is greater than a preset count threshold;
a first construction unit, configured to construct a first peer count matrix based on the first peer counts of the first target user and each of the preselected peer users;
the mapping unit is used for mapping the first peer count matrix into a peer weight matrix of each preselected peer user and the first target user based on a preset mapping rule;
And the second determining unit is used for determining the target peer users of the first target user from the preselected peer users based on the peer weight matrix.
In an alternative implementation, the mapping unit is specifically configured to:
and mapping, according to a preset mapping relation between first peer counts and peer weight values, the first peer counts of the first target user and each preselected peer user in the first peer count matrix into corresponding peer weight values to obtain the peer weight matrix.
In an alternative implementation manner, the second determining module further includes:
a dividing unit for dividing the first preset time period into a preset number of second preset time periods;
a third determining unit, configured to determine, from the first target image, a second target image captured by each camera within the second preset time period;
a fourth determining unit, configured to determine, according to the second target images, second peer counts of the first target user and each of the preselected peer users within the second preset time period;
the second construction unit is used for respectively constructing, for each second preset time period, a second peer count matrix according to the second peer counts of the first target user and each preselected peer user within that period.
In an alternative implementation, the second determining unit includes:
the obtaining subunit is used for analyzing the second peer count matrices based on a preset association graph analysis method to obtain peer probability values of the preselected peer users and the first target user;
and the first determination subunit is used for determining the target peer users of the first target user from the preselected peer users according to the peer probability value and the peer weight matrix.
In an alternative implementation, the obtaining subunit includes:
the construction subunit is used for constructing a time-sequence association graph of the second peer count matrices;
the second determining subunit is used for determining the variation pattern of the second peer counts according to the time-sequence association graph;
and the third determining subunit is used for determining the peer probability value of each preselected peer user and the first target user according to the variation pattern of the second peer counts.
In an alternative implementation, the first determining subunit includes:
the calculating subunit is used for calculating the weighted probability value of each preselected peer user according to the peer probability value and the peer weight matrix;
And the identification subunit is used for identifying the peer user of the first target user from the preselected peer users based on the weighted probability value.
Compared with the prior art, which analyzes peer relationships from co-occurrence times, the peer user identification method of the first aspect of the embodiments of the present application determines the first peer count of the first target user and the second target user from the first target images captured by the cameras, and determines the target peer users of the first target user according to the first peer count. Peer relationships are thus analyzed through the peer count; when doing so, the preselected peer users of the first target user are determined from the second target users, and the target peer users of the first target user are determined from the preselected peer users according to the peer weight matrix of the first target user and the preselected peer users, so massive sample data does not need to be collected, which improves the efficiency of peer relationship analysis.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of an implementation of a peer user identification method provided in a first embodiment of the present application;
FIG. 2 is a flowchart showing the implementation of S103 in FIG. 1 in the first embodiment;
FIG. 3 is a flow chart of a specific implementation of S1034 in FIG. 2;
FIG. 4 is a flowchart of a specific implementation of S301 in FIG. 3;
FIG. 5 is a flowchart of a specific implementation of S302 in FIG. 3;
FIG. 6 is a flowchart showing the implementation of S103 in FIG. 1 in the second embodiment;
fig. 7 is a schematic structural diagram of a peer user identification device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be noted that peer relationship analysis is currently performed mainly according to time relationships: to ensure the accuracy of the analysis result, massive data must be collected in advance and classified with time windows. Acquiring and analyzing such massive data takes a long time and wastes many resources.
If the analysis is performed only on data collected within a short period (for example, half an hour), analysis efficiency improves, but peer users may pass a fixed camera together only once within that period, so the accuracy of the analysis result cannot be guaranteed.
In view of this, the present application proposes a scheme for identifying peer users by their peer count. In order to illustrate the technical solutions described in the present application, the following description is made through specific examples. As shown in fig. 1, a flowchart of an implementation of a peer user identification method according to a first embodiment of the present application is shown; the embodiment may be implemented by the hardware or software of a peer user identification device, where the peer user identification device is a computing device with computing capability, for example a server. The details are as follows:
S101, acquiring first target images shot by each camera in a first preset time period.
It can be appreciated that the cameras are generally located in the same preset area and have a certain spatial relationship to one another, and the first target images are the images, captured by all the cameras within the first preset time period, that contain target objects passing through the preset area.
S102, determining a first peer count of a first target user and a second target user according to the first target images, wherein the first target user is a user whose total number of appearances in all the first target images is greater than a preset count threshold, the second target user is a user whose total number of joint appearances with the first target user in all the first target images is greater than the preset count threshold, and the first peer count is the number of cameras which the first target user and the second target user pass through together.
In general, a plurality of users may pass through the preset area within the first preset time period, but peer users will jointly pass through several cameras that are associated in spatial position, so whether the first target user and the second target user are peer users can be determined from the number of cameras they pass through together. In this embodiment, the first target user is a user whose total number of appearances in all the first target images is greater than a preset count threshold, and the second target user is a user whose total number of joint appearances with the first target user in all the first target images is greater than the preset count threshold. It should be noted that the preset count threshold may be determined from existing peer data; for example, the number of cameras that known peer users pass through together within the first preset time period may be determined from existing peer data and used as the preset count threshold.
The number of cameras through which the first target user and the second target user pass together is defined as the first peer count, and the first peer count of the first target user and the second target user is determined from the first target images.
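As a concrete illustration only (not the patent's implementation), the counting described in S101-S102 can be sketched in Python. The input format — a mapping from camera ID to the list of user IDs recognized in that camera's first target images — and the function name are assumptions made for this sketch.

```python
from collections import defaultdict
from itertools import combinations

def first_peer_counts(detections, count_threshold):
    """detections: dict mapping camera_id -> list of user IDs recognized in that
    camera's first target images (one entry per appearance).
    Returns (first_target_users, peer_counts), where peer_counts[(u, v)] is the
    first peer count: the number of cameras u and v passed through together."""
    # Total appearances of each user across all first target images.
    appearances = defaultdict(int)
    for user_ids in detections.values():
        for u in user_ids:
            appearances[u] += 1
    first_target_users = {u for u, n in appearances.items() if n > count_threshold}

    # First peer count: number of cameras jointly passed by each pair of users.
    peer_counts = defaultdict(int)
    for user_ids in detections.values():
        for u, v in combinations(sorted(set(user_ids)), 2):
            peer_counts[(u, v)] += 1
    return first_target_users, peer_counts
```

Under this reading, a second target user of a given first target user is any other user whose joint appearances with that user also exceed the threshold, and S103 then compares their first peer count with the preset count threshold.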
S103, determining target peer users of the first target user according to the first peer count.
It may be appreciated that if a second target user is a peer of the first target user, the second target user and the first target user at least pass together through cameras that are correlated in spatial position; in this embodiment, the target peer users of the first target user are determined by comparing the first peer count with a preset count threshold.
As can be seen from the above analysis, the peer user identification method provided in the embodiment of the present application determines the first peer count of the first target user and the second target user according to the first target images captured by the cameras, and determines the target peer users of the first target user according to the first peer count. Peer relationships are thus analyzed through the peer count, and no massive sample data needs to be collected when the analysis is performed in this way, which improves the efficiency of peer relationship analysis.
In an alternative implementation, as shown in fig. 2, a flowchart is shown for implementing S103 in fig. 1 in the first embodiment. As can be seen from fig. 2, in the present embodiment, S103 includes S1031 to S1034. The details are as follows:
S1031, determining a second target user whose first peer count is greater than a preset count threshold as a preselected peer user of the first target user.
It can be understood that in practical application scenarios the preset count threshold may also be obtained from empirical values and extensive experimental tests, and determining peer users solely from the relationship between the first peer count and the preset count threshold may cause misjudgments. Therefore, in this embodiment, a second target user whose first peer count is greater than the preset count threshold is defined as a preselected peer user of the first target user, and the target peer users are then determined from the preselected peer users.
S1032, constructing a first peer count matrix based on the first peer counts of the first target user and each preselected peer user.
The peer count matrix represents the number of cameras through which each preselected peer user and the first target user pass together. By constructing the first peer count matrix, the number of cameras jointly passed by each preselected peer user and the first target user can be observed clearly, and whether a preselected peer user and the first target user are target peer users is determined on the basis of the number of cameras jointly passed.
S1033, mapping the first peer count matrix into a peer weight matrix of each preselected peer user and the first target user based on a preset mapping rule.
It will be appreciated that the greater the first peer count, the higher the corresponding peer probability. In this embodiment, a mapping relationship between the first peer count and the peer weight value is established in advance. Specifically, an appropriate camera mapping function may be derived from existing peer data; for example, preset peer weight values are assigned to users whose peer count (the number of cameras passed together) within the first preset time period is greater than a preset count threshold, so as to increase the weight for predicting the peer relationship of the two persons. An upper limit on the peer weight value may also be set: once the peer weight value corresponding to a first peer count exceeds a certain threshold, all larger first peer counts correspond to that fixed peer weight value.
Further, the first peer count matrix is mapped into a peer weight matrix of each preselected peer user and the first target user.
Specifically, S1033 includes: and mapping, according to a preset mapping relation between first peer counts and peer weight values, the first peer counts of the first target user and each preselected peer user in the first peer count matrix into corresponding peer weight values to obtain the peer weight matrix.
S1034, determining the target peer users of the first target user from the preselected peer users based on the peer weight matrix.
It may be appreciated that the peer weight value is a weight used to scale the peer probability value: the greater the first peer count, the higher the corresponding peer weight value. In this embodiment, in order to improve calculation efficiency while ensuring the accuracy of the result, an upper limit on the peer weight value is preset, that is, once the first peer count reaches a certain number, all the corresponding peer weight values equal this upper limit.
As can be seen from the above analysis, this embodiment determines preselected peer users from the second target users, constructs a peer weight matrix of the preselected peer users and the first target user, and determines the target peer users of the first target user from the preselected peer users according to the peer weight matrix, which further improves the accuracy of peer user identification.
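The patent leaves the mapping rule open ("an appropriate camera mapping function"); purely as one possible reading of S1032-S1033, the sketch below maps a row of first peer counts to peer weights that grow with the count and saturate at a preset upper limit. The linear-then-capped form, the function name, and the parameters are illustrative assumptions.

```python
import numpy as np

def peer_weight_matrix(first_peer_count_matrix, count_threshold, weight_cap):
    """first_peer_count_matrix: array of first peer counts between the first
    target user and each preselected peer user (a 1 x N row in the patent's
    terms). Returns the corresponding peer weight matrix."""
    counts = np.asarray(first_peer_count_matrix, dtype=float)
    # Weight grows with the count above the preset count threshold and
    # saturates at the preset upper limit (weight_cap), as described above.
    return np.minimum(np.clip(counts - count_threshold, 0.0, None), weight_cap)
```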
In an alternative implementation, as shown in fig. 3, there is a specific implementation flowchart of S1034 in fig. 2. As can be seen from fig. 3, S1034 includes S301 to S302. The details are as follows:
S301, analyzing the second peer count matrices based on a preset association graph analysis method to obtain peer probability values of the preselected peer users and the first target user.
Specifically, the second peer count matrices are obtained by dividing the first preset time period into a preset number of second preset time periods and constructing, for each second preset time period, a matrix of the second peer counts of the first target user and each preselected peer user within that period.
The preset association graph analysis method may be selected according to the actual situation, for example an analysis method based on association degree, an analysis method based on centrality, or an analysis method based on clustering; in this embodiment, the association-degree-based analysis method is taken as an example. Fig. 4 shows a flowchart of a specific implementation of S301 in fig. 3.
As can be seen from fig. 4, in the present embodiment, S301 includes S3011 to S3013. The details are as follows:
S3011, constructing a time-sequence association graph of the second peer count matrices.
It should be noted that, since the second preset time periods are consecutive, the temporal order of the second peer counts is determined by the order of the second preset time periods, and the time-sequence association graph of the second peer count matrices is constructed accordingly.
S3012, determining the variation pattern of the second peer counts according to the time-sequence association graph.
It can be understood that the time-sequence association graph represents the association degree of all the second peer counts in time and space, and the variation pattern of the second peer counts can be determined from these association degrees.
S3013, determining the peer probability value of each preselected peer user and the first target user according to the variation pattern of the second peer counts.
It can be understood that the peer probability value is proportional to the variation pattern: a regularly changing ratio constant is allocated to each second peer count according to the variation pattern of the second peer counts, and each second peer count is multiplied by its corresponding ratio constant, which yields the peer probability values of the preselected peer users and the first target user.
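The patent does not specify the ratio constants; as a hedged sketch of S3011-S3013 only, the code below assigns each second preset time period a constant that changes regularly (here, linearly) with its position in the time sequence, multiplies each second peer count by its constant, and normalizes the result into probability-like values. The choice of constants, the normalization, and the function name are assumptions.

```python
import numpy as np

def peer_probability_values(second_peer_count_matrices):
    """second_peer_count_matrices: K x N array with one row per second preset
    time period and one column per preselected peer user. Returns one
    probability-like peer value per preselected peer user."""
    counts = np.asarray(second_peer_count_matrices, dtype=float)
    # Illustrative "regularly changing" ratio constants: one per sub-period,
    # increasing linearly so later co-occurrences contribute slightly more.
    ratios = np.linspace(1.0, 2.0, counts.shape[0]).reshape(-1, 1)
    scores = (counts * ratios).sum(axis=0)
    total = scores.sum()
    return scores / total if total > 0 else np.zeros_like(scores)
```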
S302, determining the target peer users of the first target user from the preselected peer users according to the peer probability values and the peer weight matrix.
It may be appreciated that each peer probability value represents the probability that the corresponding preselected peer user accompanies the first target user over the second preset time periods; the peer probability values and the peer weight matrix are weighted and summed, so the target peer users of the first target user can be determined from the preselected peer users according to the weighted sum.
As can be seen from the above analysis, in this embodiment a time-sequence association graph of the second peer count matrices is constructed, the variation pattern of the second peer counts is analyzed from that graph, the peer probability values are then determined according to the variation pattern, and the peer probability values are combined with the peer weight matrix to determine the target peer users of the first target user from the preselected peer users, which further improves the accuracy of peer user identification.
In an alternative implementation, as shown in fig. 5, there is a flowchart of a specific implementation of S302 in fig. 3. As can be seen from fig. 5, S302 includes S3021 to S3022. The details are as follows:
S3021, calculating weighted probability values of the preselected peer users according to the peer probability values and the peer weight matrix.
The peer probability values are weighted and summed with the peer weight matrix to obtain the weighted probability value of each preselected peer user.
S3022, identifying the peer users of the first target user from the preselected peer users based on the weighted probability values.
It will be appreciated that the greater the weighted probability value, the higher the probability that the preselected peer user is a peer of the first target user. In this embodiment, a weighted probability threshold is preset: when the weighted probability value of a preselected peer user is greater than the weighted probability threshold, the preselected peer user is determined to be a peer user of the first target user; when the weighted probability value is less than or equal to the weighted probability threshold, the preselected peer user is determined not to be a peer user of the first target user.
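Putting S3021-S3022 together, the sketch below combines the peer probability values with the peer weight matrix (treated as a row of weights, one per preselected peer user) and thresholds the result. The element-wise weighting and the threshold value are assumptions; the patent only specifies a weighted summation and a preset weighted probability threshold.

```python
import numpy as np

def identify_peer_users(peer_probabilities, peer_weights, candidates, threshold):
    """peer_probabilities, peer_weights: length-N arrays aligned with the list
    of preselected peer users `candidates`. Returns the candidates whose
    weighted probability value exceeds the preset weighted probability threshold."""
    weighted = (np.asarray(peer_probabilities, dtype=float)
                * np.asarray(peer_weights, dtype=float))
    return [user for user, value in zip(candidates, weighted) if value > threshold]
```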
In an alternative implementation, as shown in fig. 6, a flowchart is shown for a specific implementation of S103 in fig. 1 in the second embodiment. As can be seen from fig. 6, the specific implementations of S601 to S603 and S608 are the same as those of S1031 to S1033 and S1034 in the embodiment shown in fig. 2, except that S604 to S607 are further included before S608; S604 and S603 are executed in parallel.
The details are as follows:
S604, dividing the first preset time period into a preset number of second preset time periods.
It will be appreciated that the first preset time period is generally a long period, for example on the order of hours, and determining peer users from peer counts acquired over such a long period may lead to misjudgments. Therefore, in this embodiment, the first preset time period is divided into a preset number of second preset time periods, which may be periods on the order of minutes.
S605, determining second target images shot by each camera within the second preset time period from the first target images.
S606, determining second peer counts of the first target user and each preselected peer user within the second preset time period according to the second target images.
S607, respectively constructing, for each second preset time period, a second peer count matrix according to the second peer counts of the first target user and each preselected peer user within that period.
It can be appreciated that the process of constructing the second peer count matrices is the same as that of constructing the first peer count matrix, and is not described again here.
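To make S604-S607 concrete, the sketch below splits the first preset time period into equal second preset time periods and builds one second peer count matrix (here, a dictionary of counts) per sub-period, reusing the joint-camera counting idea from S102. The timestamped detection format and the function name are illustrative assumptions, not the patent's data model.

```python
from collections import defaultdict

def second_peer_count_matrices(timed_detections, period_start, period_end,
                               num_subperiods, first_target, preselected):
    """timed_detections: iterable of (timestamp, camera_id, user_id) records
    extracted from the first target images. Returns a list with one dict per
    second preset time period, mapping each preselected peer user to its second
    peer count (cameras jointly passed with the first target user)."""
    sub_len = (period_end - period_start) / num_subperiods
    # cameras[k][user] = set of cameras where `user` appeared in sub-period k.
    cameras = [defaultdict(set) for _ in range(num_subperiods)]
    for ts, cam, user in timed_detections:
        if period_start <= ts < period_end:
            k = min(int((ts - period_start) / sub_len), num_subperiods - 1)
            cameras[k][user].add(cam)
    matrices = []
    for k in range(num_subperiods):
        target_cams = cameras[k].get(first_target, set())
        matrices.append({u: len(target_cams & cameras[k].get(u, set()))
                         for u in preselected})
    return matrices
```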
As can be seen from the above analysis, in this embodiment the first preset time period is divided into a preset number of second preset time periods, and a second peer count matrix is constructed for each of them. This provides a data basis for further peer user analysis based on the second peer count matrices.
Fig. 7 is a schematic structural diagram of a peer user identification device according to an embodiment of the present application. As can be seen from fig. 7, the peer user identification device 7 provided in the embodiment of the present application includes: an acquisition module 701, a first determination module 702 and a second determination module 703. The details are as follows:
an acquiring module 701, configured to acquire a first target image captured by each camera within a first preset period of time;
a first determining module 702, configured to determine, according to the first target images, a first peer count of a first target user and a second target user, where the first target user is a user whose total number of appearances in all the first target images is greater than a preset count threshold, the second target user is a user whose total number of joint appearances with the first target user in all the first target images is greater than the preset count threshold, and the first peer count is the number of cameras through which the first target user and the second target user pass together;
a second determining module 703, configured to determine target peer users of the first target user according to the first peer count.
In an alternative implementation, the second determining module 703 includes:
a first determining unit, configured to determine, as a preselected peer user of the first target user, a second target user whose first peer count is greater than a preset count threshold;
a first construction unit, configured to construct a first peer count matrix based on the first peer counts of the first target user and each of the preselected peer users;
the mapping unit is used for mapping the first peer count matrix into a peer weight matrix of each preselected peer user and the first target user based on a preset mapping rule;
and the second determining unit is used for determining the target peer users of the first target user from the preselected peer users based on the peer weight matrix.
In an alternative implementation, the mapping unit is specifically configured to:
and mapping, according to a preset mapping relation between first peer counts and peer weight values, the first peer counts of the first target user and each preselected peer user in the first peer count matrix into corresponding peer weight values to obtain the peer weight matrix.
In an alternative implementation, the second determining module 703 further includes:
a dividing unit for dividing the first preset time period into a preset number of second preset time periods;
a third determining unit, configured to determine, from the first target image, a second target image captured by each camera within the second preset time period;
a fourth determining unit, configured to determine, according to the second target images, second peer counts of the first target user and each of the preselected peer users within the second preset time period;
the second construction unit is used for respectively constructing, for each second preset time period, a second peer count matrix according to the second peer counts of the first target user and each preselected peer user within that period.
In an alternative implementation, the second determining unit includes:
the obtaining subunit is used for analyzing the second peer count matrices based on a preset association graph analysis method to obtain peer probability values of the preselected peer users and the first target user;
and the first determination subunit is used for determining the target peer users of the first target user from the preselected peer users according to the peer probability value and the peer weight matrix.
In an alternative implementation, the obtaining subunit includes:
the construction subunit is used for constructing a time-sequence association graph of the second peer count matrices;
the second determining subunit is used for determining the variation pattern of the second peer counts according to the time-sequence association graph;
and the third determining subunit is used for determining the peer probability value of each preselected peer user and the first target user according to the variation pattern of the second peer counts.
In an alternative implementation, the first determining subunit includes:
the calculating subunit is used for calculating the weighted probability value of each preselected peer user according to the peer probability value and the peer weight matrix;
and the identification subunit is used for identifying the peer user of the first target user from the preselected peer users based on the weighted probability value.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application. As shown in fig. 8, the server 8 of this embodiment includes: a processor 80, a memory 81 and a computer program 82, such as a peer user identification program, stored in the memory 81 and executable on the processor 80. The steps of the various peer user identification method embodiments described above, such as steps 101 through 103 shown in fig. 1, are implemented when the processor 80 executes the computer program 82.
By way of example, the computer program 82 may be partitioned into one or more modules/units that are stored in the memory 81 and executed by the processor 80 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 82 in the server 8. For example, the computer program 82 may be divided into an acquisition module, a first determination module, and a second determination module (a module in the virtual device), each of which functions specifically as follows:
the acquisition module is used for acquiring first target images shot by each camera in a first preset time period;
the first determining module is used for determining a first peer count of a first target user and a second target user according to the first target images, wherein the first peer count is the number of cameras which the first target user and the second target user pass through together;
and the second determining module is used for determining the target peer users of the first target user according to the first peer count.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of communication units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments may also be implemented by a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content contained in the computer readable medium may be adjusted as appropriate to the requirements of legislation and patent practice in the relevant jurisdictions; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A peer user identification method, comprising:
acquiring a first target image shot by each camera in a first preset time period;
determining a first peer count of a first target user and a second target user according to the first target image, wherein the first target user is a user whose total number of appearances in all the first target images is greater than a preset count threshold, the second target user is a user whose total number of joint appearances with the first target user in all the first target images is greater than the preset count threshold, and the first peer count is the number of cameras through which the first target user and the second target user pass together;
and constructing a peer weight matrix of the first target user and a preselected peer user based on the first peer count, and determining the target peer user of the first target user according to the peer weight matrix, wherein the preselected peer user is the second target user whose first peer count is greater than a preset count threshold.
2. The peer user identification method as claimed in claim 1, wherein the constructing a peer weight matrix of the first target user and the second target user based on the first peer count, and determining the target peer user of the first target user according to the peer weight matrix, includes:
determining a second target user with the first peer count greater than a preset count threshold as a preselected peer user of the first target user;
constructing a first peer count matrix based on the first peer counts of the first target user and each preselected peer user;
mapping the first peer count matrix into a peer weight matrix of each preselected peer user and the first target user based on a preset mapping rule;
and determining the target peer user of the first target user from the preselected peer users based on the peer weight matrix.
3. The peer user identification method as claimed in claim 2, wherein said mapping the first peer count matrix to a peer weight matrix for each of the preselected peer users and the first target user based on a preset mapping rule includes:
and mapping, according to a preset mapping relation between first peer counts and peer weight values, the first peer counts of the first target user and each preselected peer user in the first peer count matrix into corresponding peer weight values to obtain the peer weight matrix.
4. The peer user identification method as claimed in claim 2, comprising, prior to said determining a target peer user of said first target user from said preselected peer users based on said peer weight matrix:
dividing the first preset time period into a preset number of second preset time periods;
determining a second target image shot by each camera in the second preset time period from the first target image;
determining second peer counts of the first target user and each preselected peer user in the second preset time period according to the second target image;
and respectively constructing, for each second preset time period, a second peer count matrix according to the second peer counts of the first target user and each preselected peer user within that period.
5. The peer user identification method as claimed in claim 4, wherein said determining a target peer user of said first target user from said preselected peer users based on said peer weight matrix comprises:
analyzing the second peer count matrices based on a preset association graph analysis method to obtain peer probability values of the preselected peer users and the first target user;
and determining the target peer users of the first target user from the preselected peer users according to the peer probability value and the peer weight matrix.
6. The peer user identification method as claimed in claim 5, wherein the analyzing the second peer count matrices based on a preset association graph analysis method to obtain peer probability values of each of the preselected peer users and the first target user includes:
constructing a time-sequence association graph of the second peer count matrices;
determining a variation pattern of the second peer counts according to the time-sequence association graph;
and determining the peer probability value of each preselected peer user and the first target user according to the variation pattern of the second peer counts.
7. The method for identifying peer users according to claim 5 or 6, wherein said determining a target peer user of said first target user from said preselected peer users based on said peer probability values and said peer weight matrix comprises:
respectively calculating the weighted probability value of each preselected peer user according to the peer probability value and the peer weight matrix;
and identifying the peer user of the first target user from the preselected peer users based on the weighted probability values.
8. A peer user identification device, comprising:
the acquisition module is used for acquiring first target images shot by each camera in a first preset time period;
the first determining module is used for determining a first peer count of a first target user and a second target user according to the first target images, wherein the first target user is a user whose total number of appearances in all the first target images is greater than a preset count threshold, the second target user is a user whose total number of joint appearances with the first target user in all the first target images is greater than the preset count threshold, and the first peer count is the number of cameras which the first target user and the second target user pass through together;
and the second determining module is used for constructing a peer weight matrix of the first target user and a preselected peer user based on the first peer count, and determining the target peer user of the first target user according to the peer weight matrix, wherein the preselected peer user is the second target user whose first peer count is greater than a preset count threshold.
9. A server comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the peer user identification method as claimed in any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the peer user identification method according to any of claims 1 to 7.
CN201911405607.7A 2019-12-31 2019-12-31 Method, device, server and storage medium for identifying peer users Active CN111191601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911405607.7A CN111191601B (en) 2019-12-31 2019-12-31 Method, device, server and storage medium for identifying peer users

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911405607.7A CN111191601B (en) 2019-12-31 2019-12-31 Method, device, server and storage medium for identifying peer users

Publications (2)

Publication Number Publication Date
CN111191601A CN111191601A (en) 2020-05-22
CN111191601B true CN111191601B (en) 2023-05-12

Family

ID=70710472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911405607.7A Active CN111191601B (en) 2019-12-31 2019-12-31 Method, device, server and storage medium for identifying peer users

Country Status (1)

Country Link
CN (1) CN111191601B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112559583B (en) * 2020-11-30 2023-09-01 杭州海康威视数字技术股份有限公司 Method and device for identifying pedestrians
CN112836599A (en) * 2021-01-19 2021-05-25 东方网力科技股份有限公司 Method, device and equipment for querying fellow persons based on face snapshot data
CN112965978B (en) * 2021-03-10 2024-02-09 中国民航信息网络股份有限公司 Method and device for confirming relationship between passengers and pedestrians, electronic equipment and storage medium
CN113449180B (en) * 2021-05-07 2022-10-28 浙江大华技术股份有限公司 Method and device for analyzing peer relationship and computer readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210335A (en) * 2019-05-16 2019-09-06 上海工程技术大学 A kind of training method, system and the device of pedestrian's weight identification learning model

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106126597A (en) * 2016-06-20 2016-11-16 乐视控股(北京)有限公司 User property Forecasting Methodology and device
CN108108662B (en) * 2017-11-24 2021-05-25 深圳市华尊科技股份有限公司 Deep neural network recognition model and recognition method
CN108229335A (en) * 2017-12-12 2018-06-29 深圳市商汤科技有限公司 It is associated with face identification method and device, electronic equipment, storage medium, program
CN109766786B (en) * 2018-12-21 2020-10-23 深圳云天励飞技术有限公司 Character relation analysis method and related product
CN109784199B (en) * 2018-12-21 2020-11-24 深圳云天励飞技术有限公司 Peer-to-peer analysis method and related product
CN110414432B (en) * 2019-07-29 2023-05-16 腾讯科技(深圳)有限公司 Training method of object recognition model, object recognition method and corresponding device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210335A (en) * 2019-05-16 2019-09-06 上海工程技术大学 A kind of training method, system and the device of pedestrian's weight identification learning model

Also Published As

Publication number Publication date
CN111191601A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
CN111191601B (en) Method, device, server and storage medium for identifying peer users
CN111614690B (en) Abnormal behavior detection method and device
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
US20200204576A1 (en) Automated determination of relative asset importance in an enterprise system
CN109858441A (en) A kind of monitoring abnormal state method and apparatus for construction site
CN103559205A (en) Parallel feature selection method based on MapReduce
CN114818828B (en) Training method of radar interference perception model and radar interference signal identification method
CN111984544B (en) Device performance test method and device, electronic device and storage medium
CN108304322B (en) Pressure testing method and terminal equipment
CN113726783A (en) Abnormal IP address identification method and device, electronic equipment and readable storage medium
CN109213965B (en) System capacity prediction method, computer readable storage medium and terminal device
CN115272797A (en) Training method, using method, device, equipment and storage medium of classifier
CN108363740B (en) IP address analysis method and device, storage medium and terminal
CN115273191A (en) Face document gathering method, face recognition method, device, equipment and medium
US20200159866A1 (en) Perceived Web Page Loading Time
CN113296992A (en) Method, device, equipment and storage medium for determining abnormal reason
CN111368128B (en) Target picture identification method, device and computer readable storage medium
CN116939661A (en) SIM card abnormality detection method and system, electronic equipment and storage medium
CN113780666B (en) Missing value prediction method and device and readable storage medium
WO2018188352A1 (en) Method, device, and apparatus for determining resource balance, and storage medium
CN114519520A (en) Model evaluation method, model evaluation device and storage medium
CN114155578A (en) Portrait clustering method, device, electronic equipment and storage medium
CN114697127A (en) Service session risk processing method based on cloud computing and server
EP4349055A2 (en) Dimensioning of telecommunication infrastructure
CN111027599B (en) Clustering visualization method and device based on random sampling

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant