CN112668889A - Method, device and storage medium for detecting risk user - Google Patents

Method, device and storage medium for detecting risk user

Info

Publication number: CN112668889A
Application number: CN202011608747.7A
Authority: CN (China)
Prior art keywords: user, target, target user, risk, social software
Other languages: Chinese (zh)
Inventor: 张莎妮
Current Assignee: Shanghai Zhangmen Science and Technology Co Ltd
Original Assignee: Shanghai Zhangmen Science and Technology Co Ltd
Application filed by Shanghai Zhangmen Science and Technology Co Ltd
Priority date / Filing date: 2020-12-30
Publication date: 2021-04-16
Legal status: Pending

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application provides a method, a device, and a storage medium for detecting risky users. The detection method is applied to a server and comprises the following steps: when it is detected that a target user has registered with target social software, establishing a specified relationship with the target user through a virtual user; obtaining, through the target social software, the content of a session between the virtual user and the target user; and performing risk detection on the target user according to the session content to determine whether the target user is a risky user. This improves the timeliness and efficiency of risky-user detection.

Description

Method, device and storage medium for detecting risk user
Technical Field
The present application relates to the field of information technology, and in particular, to a method, an apparatus, and a storage medium for detecting a risky user.
Background
Social software is software that enables people to communicate over a network; it removes the constraints of physical space and brings people closer together.
For social software, it is important to identify risky users quickly and accurately. If risky users are not identified in time, they will inevitably affect normal users, for example by sending them harassing messages.
At present, a common approach to detecting risky users identifies them by combining device fingerprints with long-term behavior. This approach has poor timeliness and easily degrades the experience of normal users.
Disclosure of Invention
The application provides a method, a device, and a storage medium for detecting risky users, so as to realize efficient risky-user detection.
The first aspect of the present application provides a method for detecting a risk user, which is applied to a server, and the method includes:
when it is detected that a target user has registered with the target social software, establishing a specified relationship with the target user through a virtual user; wherein two users having the specified relationship can have a conversation with each other through the target social software;
obtaining the conversation content of the conversation between the virtual user and the target user through the target social software;
and carrying out risk detection on the target user according to the session content to determine whether the target user is a risk user.
A second aspect of the present application provides a server, wherein the server includes:
one or more processors;
a machine-readable storage medium to store one or more computer-readable instructions,
when executed by the one or more processors, cause the one or more processors to implement a method as described in the first aspect of the application.
A third aspect of the present application provides a machine-readable storage medium, on which a program is stored, which when executed by a processor, implements the method for detecting an at-risk user as described in the first aspect above.
The embodiment of the application has the following beneficial effects:
in the embodiments of the application, when the server detects that a target user has registered with the target social software, it establishes the specified relationship with the target user through a virtual user, obtains the content of the session between the virtual user and the target user through the target social software, and then performs risk detection on the target user according to the obtained session content to determine whether the target user is a risky user. This ensures that risky users are detected in time, prevents them from degrading the experience of normal users, and improves both the timeliness and the efficiency of risky-user detection.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a risk user detection method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart illustrating a process of establishing a designated relationship between a virtual user and a target user according to an embodiment of the present application;
FIG. 3 is a schematic flowchart illustrating a process of obtaining session content of a session between a virtual user and a target user through target social software according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a process of determining whether a target user is a risky user according to an embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating risk user detection according to an embodiment of the present application;
fig. 6 is a block diagram of a server according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one type of device from another. For example, a first device may also be referred to as a second device, and similarly, a second device may also be referred to as a first device, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
The risky-user detection method according to the embodiments of the present application is described in more detail below, but the application should not be limited thereto.
Referring to fig. 1, a method for detecting a risky user according to a first aspect of the present application includes the following steps:
it should be noted that an execution subject of the risk user detection method provided in the first aspect of the present application may be a server, such as a server of social software APP.
S100: when it is detected that the target user has registered with the target social software, establishing a specified relationship with the target user through the virtual user; wherein two users having the specified relationship can have a conversation with each other through the target social software.
S200: and acquiring conversation content of the conversation between the virtual user and the target user through the target social software.
S300: and carrying out risk detection on the target user according to the session content so as to determine whether the target user is a risk user.
In the embodiment of the present application, the target social software does not refer to a fixed social software, but may refer to any social software that allows risk user detection by using the scheme provided in the embodiment of the present application.
The target user may refer to any user registered to the target social software.
In the embodiments of the application, considering that risky users of social software show obvious characteristics in their session content, such as content involving politics, pornography, or violence, or involving personal property or illegal links, risk detection for a user can be effectively realized by obtaining the user's session content.
However, because a user's session content belongs to the user's personal privacy, the server usually cannot obtain it directly, and therefore cannot perform risk detection on the user according to that content.
Therefore, in order to perform risk detection according to a user's session content, the server may create a virtual user (also referred to as a small robot), have the virtual user converse with users of the social software, obtain the session content in this way, and perform risk detection on the user according to the obtained content.
Based on this, when the server detects that the target user has registered with the target social software, it can establish the specified relationship with the target user through the virtual user, so that the virtual user and the target user can converse through the target social software.
Illustratively, the specified relationship may be a "friend" relationship, or a "mutual interest" relationship, or the like.
For example, the virtual user for establishing the specified relationship with the target user may be established in advance, or may be established when the target user is detected to register into the target social software.
Illustratively, one virtual user is used to establish the specified relationship with one user, or one virtual user is used to establish the specified relationship with a plurality of users.
After establishing the specified relationship with the target user through the virtual user, the server can control the virtual user to converse with the target user through the target social software, obtain the content of that session through the target social software, and perform risk detection on the target user according to the obtained session content to determine whether the target user is a risky user.
In the method flow shown in fig. 1, the server creates a virtual user, establishes the specified relationship through the virtual user with a target user registered with the target social software, obtains the session content of the virtual user and the target user through the target social software, and then performs risk detection on the target user according to the obtained session content to determine whether the target user is a risky user. This ensures that risky users are detected in time, avoids their influence on the experience of normal users, and improves both the timeliness and the efficiency of risky-user detection.
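As an illustration only, the following minimal Python sketch strings steps S100-S300 together on the server side. The `social_api` object and every helper on it (`create_virtual_user`, `send_relation_request`, and so on) are hypothetical placeholders rather than part of the patent or of any real social-software SDK, and the risk-detection step is passed in as `detect_fn` so that any of the approaches described below (sensitive-word matching, a machine learning model, or manual review) could be plugged in.

```python
from typing import Callable, List

def on_user_registered(social_api, target_user_id: str,
                       detect_fn: Callable[[List[str]], bool]) -> bool:
    """Run the Fig. 1 flow for one newly registered user; return True if judged risky."""
    # S100: establish the specified relationship with the target user through a virtual user.
    virtual_user_id = social_api.create_virtual_user()
    social_api.send_relation_request(virtual_user_id, target_user_id)
    if not social_api.wait_for_relation_confirm(virtual_user_id, target_user_id):
        return False  # relationship not established, so there is no session content to inspect

    # S200: obtain the content of the session between the virtual user and the target user.
    replies = social_api.collect_replies(virtual_user_id, target_user_id, preset_count=5)

    # S300: perform risk detection on the collected session content.
    return detect_fn(replies)
```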
In some embodiments, referring to fig. 2, establishing the specified relationship with the target user through the virtual user in step S100 may be implemented by the following steps:
and S101, creating a virtual user.
S102, controlling the virtual user to send a specified relation establishment request message to the target user through the target social software, wherein the specified relation establishment request message is used for requesting establishment of a specified relation between the virtual user and the target user.
S103, when receiving a specified relation establishment confirmation message responded by the target user through the target social software, establishing a specified relation between the virtual user and the target user.
For example, when the server detects that the target user has registered with the target social software, it may create the virtual user and control the virtual user to send a specified-relationship establishment request message to the target user through the target social software, requesting establishment of the specified relationship between the virtual user and the target user.
When a specified-relationship establishment confirmation message returned by the target user through the target social software is received, the specified relationship between the virtual user and the target user can be established, after which the virtual user can converse with the target user through the target social software.
It should be noted that, because a risky user usually tries actively to establish the specified relationship with other users in order to send them harassing messages, the server may also recommend a virtual user to the target user and let the target user actively initiate the relationship. In that case, when the server receives a specified-relationship establishment request message sent by the target user to the virtual user through the target social software, it may return a specified-relationship establishment confirmation message to the target user through the target social software, thereby establishing the specified relationship between the virtual user and the target user.
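Purely as a sketch of this reverse path, using the same hypothetical `social_api` placeholders as above (the helper names `send_relation_confirm` and `mark_for_risk_detection` are illustrative assumptions):

```python
def on_incoming_relation_request(social_api, virtual_user_id: str,
                                 requester_user_id: str) -> None:
    """Reverse path: the target user actively requests the specified relationship
    with a recommended virtual user, and the server confirms automatically."""
    # Risky users often initiate contact themselves, so a request aimed at a
    # virtual user is itself a useful signal; queue the requester for detection.
    social_api.send_relation_confirm(virtual_user_id, requester_user_id)
    social_api.mark_for_risk_detection(requester_user_id)
```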
In some embodiments, referring to fig. 3, in step S200, obtaining session content of a session between a virtual user and a target user through target social software may be implemented by the following steps:
s201, controlling a virtual user to send a preset message to a target user through target social software;
s202, receiving a message replied to the virtual user by the target user through the target social software.
For example, once the virtual user and the target user have established the specified relationship, the server may control the virtual user to send preset messages, such as "hello" and "where are you", to the target user through the target social software, and receive the messages the target user replies to the virtual user through the target social software, thereby obtaining the session content of the virtual user and the target user.
It should be noted that, because a risky user usually sends harassing messages to other users on its own initiative, the session content obtained by the server through the target social software after the specified relationship is established may also include messages that the target user actively sends to the virtual user.
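A sketch of steps S201-S202 under the same hypothetical `social_api` placeholders is shown below; the preset messages, the timeout, and the round limit are illustrative values, not values taken from the application.

```python
from typing import List

PRESET_MESSAGES = ["Hello", "Where are you?"]  # illustrative preset openers

def collect_session_content(social_api, virtual_user_id: str, target_user_id: str,
                            preset_count: int, max_rounds: int = 10) -> List[str]:
    """S201-S202: send preset messages and gather the target user's replies until
    `preset_count` messages have been collected (or max_rounds is exhausted)."""
    replies: List[str] = []
    for i in range(max_rounds):
        # S201: the virtual user sends a preset message through the social software.
        social_api.send_message(virtual_user_id, target_user_id,
                                PRESET_MESSAGES[i % len(PRESET_MESSAGES)])
        # S202: receive whatever the target user replies, which may also include
        # messages the target user sends unprompted.
        replies.extend(social_api.receive_replies(virtual_user_id, target_user_id,
                                                  timeout_seconds=600))
        if len(replies) >= preset_count:
            break
    return replies[:preset_count]
```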
In some embodiments, referring to fig. 4, in step S300, performing risk detection on the target user according to the session content to determine whether the target user is a risk user may be implemented by:
s301, sensitive vocabulary detection is carried out on messages of a preset number, sent to a virtual user by a target user through target social software;
s302, when sensitive words are detected, determining that the target user is a risk user;
and S303, when the sensitive words are not detected, determining that the target user is a non-risk user.
For example, the server may determine whether the target user is a risky user by performing sensitive-word detection on the obtained messages that the target user sent to the virtual user through the target social software.
For example, when a sensitive word, such as a word related to pornography, violence, politics, or personal property, is detected in the preset number of messages sent by the target user to the virtual user through the target social software, the target user is determined to be a risky user; otherwise, the target user is determined to be a non-risky user.
Note that risky users may intersperse normal messages among those they send to other users through the target social software. For example, a risky user may first communicate normally with other users and then suddenly send a harassing message. Therefore, in order to improve detection accuracy, the server may obtain several (i.e., a preset number greater than 1) messages sent by the target user to the virtual user through the target social software and perform sensitive-word detection on all of them to determine whether the target user is a risky user.
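A minimal sketch of the sensitive-word check in steps S301-S303 follows; the word list is a tiny illustrative placeholder, and plain substring matching stands in for whatever lexicon and matching rules a deployment would actually maintain.

```python
from typing import Iterable, List

# Illustrative placeholder list; a real deployment would use a much larger,
# curated sensitive-word lexicon.
SENSITIVE_WORDS = {"bank transfer", "gambling", "adult video"}

def find_sensitive_words(messages: Iterable[str]) -> List[str]:
    """Return every sensitive word found in the preset number of messages."""
    hits: List[str] = []
    for msg in messages:
        text = msg.lower()
        hits.extend(word for word in SENSITIVE_WORDS if word in text)
    return hits

def is_risky_user(messages: Iterable[str]) -> bool:
    """S301-S303: the target user is risky if and only if a sensitive word is detected."""
    return bool(find_sensitive_words(messages))
```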
In one example, after determining in step S302 that the target user is a risky user, the method may further include:
and determining the risk level of the target user according to the type of the detected sensitive words or/and the proportion of the sensitive words.
For example, considering that different types of risky users may cause different degrees of harm when they send harassing messages to other users through the target social software, once the target user is determined to be a risky user, the target user's risk level may be further determined, so that different processing measures can subsequently be taken according to that level.
For example, the risk level of the target user may be determined depending on the type of sensitive vocabulary detected.
For example, when the detected sensitive word is a political (e.g., reactionary), drug-related, or violence-related word, the risk level of the target user may be determined to be high.
When the detected sensitive word is related to pornography or gambling, the risk level of the target user may be determined to be low.
As another example, a risk level of the target user may be determined based on a proportion of sensitive words detected.
For example, the proportion of sensitive words may be the ratio of the number of messages containing sensitive words to the preset number, or the ratio of the number of sensitive words to the total number of words in the obtained preset number of messages sent by the target user to the virtual user through the target social software.
For example, when the proportion of detected sensitive words is higher than a preset threshold, the risk level of the target user is determined to be high; when it is lower than or equal to the threshold, the risk level is determined to be low.
Also for example, the risk level of the target user may be determined according to the type of the sensitive vocabulary and the proportion of the sensitive vocabulary.
For example, when a detected sensitive word belongs to a specified type, such as political (e.g., reactionary), drug-related, or violence-related words, the risk level of the target user may be determined to be high.
When the detected sensitive words are not of a specified type, their proportion can be further counted: when the proportion is higher than a preset threshold, the risk level of the target user is determined to be high; when it is lower than or equal to the threshold, the risk level is determined to be low.
It should be noted that, in the embodiments of the present application, the risk levels are not limited to high and low; they may also include other levels. For example, the risk levels may include high, medium, and low, or a first risk level, a second risk level, a third risk level, … (with risk increasing or decreasing in sequence). Accordingly, the strategy used to determine the risk level of the target user from the type or/and the proportion of detected sensitive words can be adjusted, which is not described again here.
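The sketch below combines the two signals discussed above, word type and word proportion, into a single two-level decision; the type names and the 0.3 threshold are illustrative assumptions rather than values from the application.

```python
from typing import List

HIGH_RISK_TYPES = {"political", "drug", "violence"}  # assumed "specified" word types
PROPORTION_THRESHOLD = 0.3                           # illustrative threshold

def classify_risk_level(hit_types: List[str],
                        messages_with_hits: int, preset_count: int) -> str:
    """Return "HIGH" or "LOW" for a user already determined to be risky."""
    # A sensitive word of a specified high-risk type yields a high level directly.
    if any(t in HIGH_RISK_TYPES for t in hit_types):
        return "HIGH"
    # Otherwise compare the proportion of messages containing sensitive words
    # (relative to the preset number of messages) against the threshold.
    proportion = messages_with_hits / preset_count if preset_count else 0.0
    return "HIGH" if proportion > PROPORTION_THRESHOLD else "LOW"
```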
In other embodiments, in step S300, performing risk detection on the target user according to the session content to determine whether the target user is a risk user may include:
and detecting a preset number of messages sent to the virtual user by the target user through the target social software by using a pre-trained machine learning model, and determining whether the target user is a risk user according to a detection result.
For example, in order to identify risky users, a machine learning model, such as a semantic model, may be trained in advance. The trained model is then used to detect the preset number of messages sent by the target user to the virtual user through the target social software, and whether the target user is a risky user is determined from the detection result.
In one example, the detection results defined for the pre-trained machine learning model may include one risk level indicating that the target user is a non-risky user and at least one risk level indicating that the target user is a risky user.
That is, based on its detection of the preset number of messages sent by the target user to the virtual user through the target social software, the model determines one of these risk levels as the risk level of the target user.
For example, the output of the pre-trained machine learning model may be 0 or 1, where 0 indicates that the target user is a non-risky user and 1 indicates that the target user is a risky user.
Alternatively, the output may be 0, 1, or 2, where 0 indicates a non-risky user, 1 indicates a risky user with a low risk level, and 2 indicates a risky user with a high risk level.
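For illustration, the model-based path can be sketched as follows; the model is treated as an opaque callable returning the labels 0, 1, or 2 described above, and nothing is assumed about how it is trained or which library implements it.

```python
from typing import Callable, Sequence

# Any callable mapping the concatenated session text to a label fits here:
# 0 = non-risky, 1 = risky with a low risk level, 2 = risky with a high risk level.
RiskModel = Callable[[str], int]

def detect_with_model(model: RiskModel, messages: Sequence[str]) -> dict:
    """Feed the preset number of collected messages to the model and interpret its label."""
    label = model("\n".join(messages))
    return {
        "is_risky": label != 0,
        "risk_level": {0: None, 1: "LOW", 2: "HIGH"}.get(label),
    }
```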
It should be noted that, in the embodiments of the present application, whether the target user is a risky user may also be determined through manual review. For example, the server may display the obtained session content of the target user in a risk-review interface, so that a reviewer can manually review it, decide whether the target user is a risky user, and input the corresponding indication to the server.
For example, when the reviewer determines that the target user is a risky user, the "yes" button of the risk-review interface can be clicked to indicate to the server that the target user is a risky user; when the reviewer determines that the target user is a non-risky user, the "no" button can be clicked to indicate to the server that the target user is a non-risky user.
The risk-review interface may further provide options for several risk levels; when the reviewer determines that the target user is a risky user, a level is selected according to the specific review result, so that the server can determine the risk level of the target user.
In some embodiments, after the risk detection is performed on the target user according to the session content in step S300, the method may further include:
and when the target user is determined to be a risk user, executing corresponding processing operation on the target user according to the risk level of the target user.
For example, when the server determines from the obtained session content that the target user is a risky user, it may perform the processing operation corresponding to the target user's risk level.
For target users with different risk levels, the server may take different processing operations; the operations taken for target users with higher risk levels impose more restrictions on their use of the target social software.
In one example, performing a corresponding processing operation on the target user according to the risk level of the target user may include:
when the risk level of the target user is a first risk level, executing a first type of processing operation on the target user;
when the risk level of the target user is a second risk level, executing a second type of processing operation on the target user;
the risk degree of the user with the first risk level is higher than that of the user with the second risk level, and the limitation of the first type of processing operation on the target user to use the target social software is higher than that of the second type of processing operation on the target user to use the target social software.
Illustratively, the risk level of the risk user includes a first risk level and a second risk level.
Wherein the risk level of the user of the first risk level is higher than the risk level of the user of the second risk level, e.g. the first risk level is a high level and the second risk level is a low level.
When the server determines that the risk level of the target user is a first risk level, a first type of processing operation can be executed on the target user;
when the server determines that the risk level of the target user is the second risk level, the second type of processing operation may be performed on the target user.
Wherein the first type of processing operation restricts the target user's use of the target social software more than the second type of processing operation does.
For example, the first type of processing operation may be banning the account (in the banned state, the user cannot use the social functions of the target social software, such as "add friends", "join a group", or "people nearby"); the second type may be restricting the target user from using designated functions of the target social software, or/and setting a tag for the target user that is visible to other users. For example, the user may be prohibited from using the "people nearby" function, or/and a "risky user" tag may be set for the target user, so that other users know the target user is risky and can stay alert when adding friends or using other social functions.
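A sketch of dispatching the processing operation by risk level is given below; the `social_api` helpers (`ban_user`, `disable_feature`, `set_public_label`) are hypothetical names that merely illustrate the first and second types of operation.

```python
def apply_processing_operation(social_api, target_user_id: str, risk_level: str) -> None:
    """Apply a processing operation whose severity matches the user's risk level."""
    if risk_level == "HIGH":
        # First type of operation: ban the account, so the social functions
        # ("add friends", "join a group", "people nearby", ...) become unavailable.
        social_api.ban_user(target_user_id)
    elif risk_level == "LOW":
        # Second type of operation: restrict designated functions and/or attach
        # a label that other users can see.
        social_api.disable_feature(target_user_id, "people_nearby")
        social_api.set_public_label(target_user_id, "risky user")
```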
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to specific examples.
Referring to fig. 5, a schematic view of a process of detecting a risk user according to an embodiment of the present application is shown in fig. 5, where an implementation process of the risk user detection is as follows:
1. The A user (i.e., the target user described above) registers with the target social software.
2. The server sends a friend request (i.e., the specified-relationship establishment request message) to the A user through the target social software, using the small robot (i.e., the virtual user).
For example, when the server sends the friend request to the A user through the target social software using the small robot, the A user's target social software client displays the request; when the A user clicks the "agree" button in the friend-request interface of the client, an agreement instruction (i.e., the specified-relationship establishment confirmation message) is sent to the small robot through the target social software.
3. The server receives the agreement instruction returned by the A user through the target social software and establishes the friend relationship between the small robot and the A user.
4. The server controls the small robot to send message a to the A user through the target social software.
For example, the server may control the small robot to send "hello" to the A user through the target social software.
When the A user receives message a through the target social software client, the A user may reply with message b through the client.
For example, the A user may reply through the target social software client: "where are you?"
5. The server controls the small robot to send several messages to the A user through the target social software until the number of messages replied by the A user through the target social software reaches n (i.e., the preset number is n, with n ≥ 2).
6. The server performs risk detection on the A user according to the n obtained messages to determine whether the A user is a risky user.
For example, the server may use a machine learning method to determine whether the n messages are problematic, and thereby determine whether the A user is a risky user.
For example, if the n messages are problematic, the A user is determined to have a higher risk level (i.e., the A user is a risky user); if not, the A user is determined to have a lower risk level (i.e., the A user is a non-risky user).
The present application also provides a server, wherein the server includes:
one or more processors;
a machine-readable storage medium to store one or more computer-readable instructions,
when executed by the one or more processors, cause the one or more processors to implement a method as described in embodiments of the first aspect of the application.
The embodiments of the risky-user detection method can be applied to a server. Taking a software implementation as an example, the processor of the server reads the corresponding computer program instructions from non-volatile memory into memory and runs them, thereby implementing the embodiments of the risky-user detection method of the present application. At the hardware level, as shown in fig. 6, which is a hardware structure diagram of a server according to an exemplary embodiment of the present application, in addition to the processor 610, the memory 630, the interface 620, and the non-volatile memory 640 shown in fig. 6, the server may also include other hardware according to its actual functions, which is not described again.
The present application also provides a machine-readable storage medium on which a program is stored, which when executed by a processor, implements the method of risk user detection as described in any of the preceding embodiments.
This application may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having program code embodied therein. Machine-readable storage media include both permanent and non-permanent, removable and non-removable media, and the storage of information may be accomplished by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of machine-readable storage media include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (13)

1. A method for detecting a risk user is applied to a server side, and is characterized in that the method comprises the following steps:
when it is detected that a target user has registered with target social software, establishing a specified relationship with the target user through a virtual user; wherein two users having the specified relationship can have a conversation with each other through the target social software;
obtaining the conversation content of the conversation between the virtual user and the target user through the target social software;
and carrying out risk detection on the target user according to the session content to determine whether the target user is a risk user.
2. The method of claim 1, wherein establishing the specified relationship with the target user via the virtual user comprises:
creating a virtual user;
controlling the virtual user to send a specified relationship establishment request message to the target user through the target social software; wherein the specified relationship establishment request message is used for requesting establishment of the specified relationship between the virtual user and the target user;
and when receiving a specified relation establishment confirmation message responded by the target user through the target social software, establishing the specified relation between the virtual user and the target user.
3. The method of claim 1, wherein the obtaining, through the target social software, session content of a session between the virtual user and the target user comprises:
controlling the virtual user to send a preset message to the target user through the target social software;
and receiving a message replied to the virtual user by the target user through the target social software.
4. The method of claim 1, wherein performing risk detection on the target user according to the session content to determine whether the target user is a risk user comprises:
sensitive vocabulary detection is carried out on messages of a preset number, sent to the virtual user by the target user through the target social software;
when sensitive words are detected, determining that the target user is a risk user;
when the sensitive vocabulary is not detected, determining that the target user is a non-risk user.
5. The method of claim 4, wherein after determining that the target user is a risk user, the method further comprises:
and determining the risk level of the target user according to the type of the detected sensitive words or/and the proportion of the sensitive words.
6. The method of claim 1, wherein performing risk detection on the target user according to the session content to determine whether the target user is a risk user comprises:
and detecting a preset number of messages sent to the virtual user by the target user through the target social software by using a pre-trained machine learning model, and determining whether the target user is a risk user according to a detection result.
7. The method of claim 6, wherein the detection results set for the pre-trained machine learning model comprise a risk level indicating that the target user is a non-risky user and at least one risk level indicating that the target user is a risky user.
8. The method according to claim 5 or 7, wherein after the risk detection of the target user according to the session content, the method further comprises:
and when the target user is determined to be a risk user, executing corresponding processing operation on the target user according to the risk level of the target user.
9. The method according to claim 8, wherein the performing the corresponding processing operation on the target user according to the risk level of the target user comprises:
when the risk level of the target user is a first risk level, executing a first type processing operation on the target user;
when the risk level of the target user is a second risk level, executing a second type of processing operation on the target user;
wherein the risk level of a user at the first risk level is higher than that of a user at the second risk level, and the first type of processing operation restricts the target user's use of the target social software more than the second type of processing operation does.
10. The method of claim 9, wherein the first type of processing operation comprises disabling full functionality of the target social software; the second type of processing operation includes disabling a portion of functionality of the target social software.
11. The method according to any one of claims 1-7, wherein one virtual user is used to establish the specified relationship with one user; or one virtual user is used to establish the specified relationship with a plurality of users.
12. A server, wherein the server comprises:
one or more processors;
a machine-readable storage medium to store one or more computer-readable instructions,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
13. A machine readable storage medium having stored thereon a program which, when executed by a processor, implements the method for detecting a risk user according to any one of claims 1-11.
CN202011608747.7A 2020-12-30 2020-12-30 Method, device and storage medium for detecting risk user Pending CN112668889A (en)

Priority Applications (1)

Application Number: CN202011608747.7A
Priority / Filing Date: 2020-12-30
Title: Method, device and storage medium for detecting risk user

Publications (1)

Publication Number: CN112668889A
Publication Date: 2021-04-16

Family

ID=75410940

Family Applications (1)

Application Number: CN202011608747.7A
Title: Method, device and storage medium for detecting risk user
Priority / Filing Date: 2020-12-30
Status: Pending

Country Status (1)

Country: CN

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113938455A (en) * 2021-10-13 2022-01-14 平安银行股份有限公司 User monitoring method and device of group chat system, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107612812A (en) * 2017-08-28 2018-01-19 珠海凡泰极客科技有限责任公司 A kind of security account-opening method and system based on chat robots
CN108400928A (en) * 2018-01-25 2018-08-14 链家网(北京)科技有限公司 A kind of instant messaging abnormal user processing method and processing device
CN109213857A (en) * 2018-08-29 2019-01-15 阿里巴巴集团控股有限公司 A kind of fraud recognition methods and device
CN110162620A (en) * 2019-01-10 2019-08-23 腾讯科技(深圳)有限公司 Black detection method, device, server and the storage medium for producing advertisement
CN110933113A (en) * 2019-12-30 2020-03-27 腾讯科技(深圳)有限公司 Block chain-based interactive behavior detection method, device, equipment and storage medium
CN111277488A (en) * 2020-01-19 2020-06-12 上海掌门科技有限公司 Session processing method and device
CN111556059A (en) * 2020-04-29 2020-08-18 深圳壹账通智能科技有限公司 Abnormity detection method, abnormity detection device and terminal equipment



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination