CN111027094B - Risk assessment method and device for private data leakage

Info

Publication number
CN111027094B
CN111027094B
Authority
CN
China
Prior art keywords: privacy, data, requester, comparison, API
Legal status: Active
Application number
CN201911226781.5A
Other languages
Chinese (zh)
Other versions
CN111027094A (en)
Inventor
邓圆
Current Assignee
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN201911226781.5A
Publication of CN111027094A
Application granted
Publication of CN111027094B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 — Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 — Protecting data
    • G06F21/62 — Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 — Protecting access to data via a platform, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 — Protecting personal data, e.g. for financial or medical purposes

Abstract

An embodiment of the present specification provides a risk assessment method for private data leakage, including: first, acquiring a request message sent by a requester to a service platform for calling an application programming interface (API), and a response message returned by the service platform for the request message, where the request message is used to request private data of a target object; next, parsing the request message and the response message to obtain parsed data, where the parsed data at least includes several target APIs, the input parameters for the target APIs, and several privacy categories of the private data output through the target APIs; then, acquiring from the service platform the authority data for the requester to call APIs, where the authority data includes the set of APIs the requester is authorized to call, the set of parameters the requester is authorized to pass in for that API set, and the set of privacy categories of the output data corresponding to that parameter set; and finally, evaluating the data leakage risk of this API call based at least on the parsed data and the authority data.

Description

Risk assessment method and device for private data leakage
Technical Field
One or more embodiments of the present disclosure relate to the technical field of data information security, and in particular, to a risk assessment method and apparatus for private data leakage.
Background
APIs (Application Programming Interfaces) are convenient to call and highly versatile, and have gradually become the main way internet services are provided. API calls have therefore also become a major focus of data leakage prevention.
A service platform typically stores the basic information of the objects it serves (such as individuals or businesses), the business data generated as they use the service, and so on. It will be appreciated that these data are private data of the service objects. When a service object authorizes part of its private data, the service platform may provide an API call service to data demanders (such as research institutions or merchants) based on that data. Usually, a data demander (or requester) can only call APIs for which it has call authority. However, as the number of APIs grows, as requesters demand customized services, and as development and management vulnerabilities inevitably arise, the data actually output by an API may still differ from what the requester declared, so the API calling process carries a risk of data leakage. For example, an API the requester has no authority to call may be called illegally and the user's private information stolen, resulting in leakage of user privacy.
Therefore, a reasonable scheme is urgently needed that can timely and accurately evaluate the risk of private data leakage caused by API calls, so as to effectively prevent such leakage.
Disclosure of Invention
One or more embodiments of the present specification describe a risk assessment method and apparatus for private data leakage, which can assess the risk of private data leakage caused by API call in a timely and accurate manner, so as to effectively prevent private data leakage.
According to a first aspect, there is provided a risk assessment method for private data leakage, the method comprising: acquiring a request message sent by a requester to a service platform for calling an application programming interface (API), and a response message returned by the service platform for the request message, wherein the request message is used to request private data of a target object; parsing the request message and the response message to obtain parsed data, wherein the parsed data at least includes several target APIs, the input parameters for the target APIs, and several privacy categories of the private data output through the target APIs; acquiring from the service platform the authority data for the requester to call APIs, wherein the authority data includes the set of APIs the requester is authorized to call, the set of parameters the requester is authorized to pass in for that API set, and the set of privacy categories of the output data corresponding to that parameter set; and evaluating the data leakage risk of this API call based at least on the parsed data and the authority data.
In one embodiment, the parsing the request message and the response message to obtain parsed data includes: analyzing the input parameters included in the request message and putting the input parameters into the analysis data; and analyzing the privacy data included in the response message, determining the privacy classes corresponding to the privacy data, and classifying the privacy classes into the analyzed data.
In a specific embodiment, the private data includes an arbitrary first field, and the first field corresponds to a first field name, a first field size, and a first field type; determining a plurality of privacy categories corresponding to the privacy data includes: determining a first category corresponding to the first field name based on a preset mapping relation between the field name and the privacy category, and classifying the first category into the privacy categories; or, determining a second category corresponding to the first field size based on a preset mapping relation between the field size and the privacy categories, and classifying the second category into the privacy categories; or, determining a third category corresponding to a combination comprising the first field size and the first field type based on a preset mapping relation between the combination comprising the field size and the field type and the privacy categories, and classifying the third category into the privacy categories.
In another specific embodiment, the private data includes a plurality of fields, and determining the several privacy categories corresponding to the private data includes: determining, based on check algorithms preset for several privacy categories, that several of the fields correspond to several fourth categories among those privacy categories, and classifying the fourth categories into the privacy categories; and/or determining, based on several preset regular expressions, that several of the fields match several of the regular expressions, determining, based on the mapping relation between regular expressions and privacy categories, the several fifth categories corresponding to the matched expressions, and classifying the fifth categories into the privacy categories.
In one embodiment, evaluating the data leakage risk of the API call based at least on the parsed data and the authority data includes: inputting the parsed data and the authority data together into a pre-trained first risk assessment model to obtain a first prediction result indicating the data leakage risk.
In one embodiment, evaluating the data leakage risk of the API call based at least on the parsed data and the authority data includes: comparing the parsed data with the authority data to obtain a comparison result; and evaluating the data leakage risk based at least on the comparison result.
In a specific embodiment, comparing the parsed data with the authority data to obtain a comparison result includes: judging whether the several target APIs belong to the API set, obtaining an API comparison sub-result for the API comparison item, and including it in the comparison result; judging whether the input parameters of the target APIs belong to the parameter set, obtaining a parameter comparison sub-result for the parameter comparison item, and including it in the comparison result; and judging whether the several privacy categories belong to the privacy category set, obtaining a category comparison sub-result for the category comparison item, and including it in the comparison result.
Further, in a more specific embodiment, judging whether the privacy categories belong to the privacy category set and obtaining a category comparison sub-result for the category comparison item includes: when it is judged that the several privacy categories contain out-of-set privacy categories that do not belong to the privacy category set, acquiring a preset mapping relation between privacy categories and privacy sensitivities; and determining, based on that mapping relation, the privacy sensitivities corresponding to the out-of-set categories and including them in the category comparison sub-result.
In another specific embodiment, the parsed data further includes a requester ID of the requester and a target object ID of the target object. After parsing the request message and the response message to obtain the parsed data, the method further includes: acquiring corresponding requester attribute information from the service platform based on the requester ID; acquiring corresponding object attribute information from the service platform based on the target object ID; and determining the matching degree between the requester attribute information and the object attribute information. Evaluating the data leakage risk based at least on the comparison result then includes: evaluating the data leakage risk based on the comparison result and the matching degree.
In a more specific embodiment, determining a degree of match between the requestor attribute information and the object attribute information includes: inputting the attribute information of the requester and the attribute information of the object into a pre-trained matching degree prediction model to obtain the matching degree between the requester and the object; or calculating the matching degree between the attribute information of the requester and the attribute information of the object based on a matching degree algorithm.
In another more specific embodiment, the requester attribute information includes one or more of: the scale of the requester, the industry of the requester, the region where the requester is registered, the region where the requester entity is located, the application type of the requester and the vulnerability scanning condition of the requester; and/or the target object comprises a target user, the target object ID comprises a user ID of the target user, the object attribute data comprises user personal information, and the user personal information comprises one or more of the following: age, gender, occupation, hobbies, frequent location, service usage preferences, service usage records.
In yet another more specific embodiment, evaluating the data leakage risk based on the comparison result and the matching degree includes: weighting the API comparison sub-result, the parameter comparison sub-result, the category comparison sub-result, and the matching degree based on weights pre-assigned to the API comparison item, the parameter comparison item, the category comparison item, and the matching degree, to obtain an assessment score for the data leakage risk.
In yet another more specific embodiment, evaluating the data leakage risk based on the comparison result and the matching degree includes: inputting the comparison result and the matching degree together into a pre-trained second risk assessment model to obtain a second prediction result indicating the data leakage risk.
According to a second aspect, there is provided a risk assessment apparatus for private data leakage, the apparatus comprising: an information acquisition unit configured to acquire a request message sent by a requester to a service platform for calling an application programming interface (API), and a response message returned by the service platform for the request message, wherein the request message is used to request private data of a target object; a parsing unit configured to parse the request message and the response message to obtain parsed data, wherein the parsed data at least includes several target APIs, input parameters for the target APIs, and several privacy categories of the private data output through the target APIs; an authority acquiring unit configured to acquire from the service platform the authority data for the requester to call APIs, wherein the authority data includes the API set the requester is authorized to call, the parameter set composed of the parameters it is authorized to pass in for that API set, and the privacy category set of the output data corresponding to the parameter set; and an evaluation unit configured to evaluate the data leakage risk of this API call based at least on the parsed data and the authority data.
According to a third aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of the first aspect.
According to a fourth aspect, there is provided a computing device comprising a memory having stored therein executable code and a processor that, when executing the executable code, implements the method of the first aspect.
In summary, with the risk assessment method and apparatus for private data leakage provided by the embodiments of the present specification, the network traffic data of API calls can be scanned in real time and parsed to obtain parsed data, and the risk of private data leakage can then be evaluated timely and accurately based on the parsed data and the acquired authority data for the requester to call APIs. In addition, the requester's attribute information and the attribute information of the target object of the API call can be acquired for the risk assessment, further improving the reliability and usability of the evaluation result.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram illustrating an implementation scenario of a risk assessment method according to an embodiment;
FIG. 2 illustrates a flow diagram of a risk assessment method for private data leakage, according to one embodiment;
FIG. 3 shows a block diagram of a risk assessment device for private data leakage according to one embodiment.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
As mentioned above, there is currently a risk of data leakage in the API calling process. Specifically, because of the large number of APIs and the difficulty of avoiding API development and management vulnerabilities, the data actually output by an API may differ from the data the requester actually requested or from the data the requester is entitled to use.
For example, an API that a certain requester has no authority to call may, because of oversights in API authority management, be called illegally by that requester and output the user's sensitive personal information, causing leakage of user privacy.
For another example, a requester has the right to call a certain API, but its subscription with the service platform covers only part of the data (e.g., user gender) out of the full data that API can output (e.g., user gender, user address, and user mobile phone number). When the requester calls the API, however, it passes in not only the input parameters corresponding to the subscribed part but also input parameters corresponding to other data (such as the user address); because of oversights in API authority management, the data the API returns to the requester (such as user gender and user address) then exceeds the subscribed data range (such as user gender).
For another example, the API called by the requester is configured with old fields that were never updated (e.g., a staff member concatenated the user's mobile phone number and identity card number into one field), so the range of data the API outputs (e.g., the user's mobile phone number and identity card number) is inconsistent with the requester's subscribed data range (e.g., the user's mobile phone number).
Based on this, the inventor proposes a risk assessment method and device for private data leakage. Fig. 1 is a schematic diagram of an implementation scenario of the risk assessment method according to an embodiment. As shown in Fig. 1, a requester may send an API call request (or request message) to the service platform through the requester client, and accordingly the service platform returns an API call response (or response message) to the requester client. It will be appreciated that the gateway may record the request message and the response message. On this basis, the risk assessment device can acquire the recorded request and response messages from the gateway and parse them to obtain parsed data; on the other hand, it can acquire from the service platform the authority data for the requester to call APIs. Further, the risk assessment device may evaluate the data leakage risk of this API call based on the parsed data and the authority data. In this way, the risk of private data leakage in an API call can be evaluated timely and accurately.
The following describes the implementation steps of the risk assessment method with reference to specific examples.
Fig. 2 shows a flowchart of a risk assessment method for private data leakage according to an embodiment, and an execution subject of the method may be any device or equipment or platform or server cluster with computing and processing capabilities, for example, the execution subject may be the risk assessment device shown in fig. 1, and may also be the service platform described above.
As shown in fig. 2, the method may include the steps of:
step S210, acquiring a request message sent by a requester to a service platform for calling an API, and a response message returned by the service platform for the request message, where the request message is used to request private data of a target object; step S220, parsing the request message and the response message to obtain parsed data, where the parsed data at least includes several target APIs, the input parameters for the target APIs, and several privacy categories of the private data output through the target APIs; step S230, acquiring from the service platform the authority data for the requester to call APIs, where the authority data includes the set of APIs the requester is authorized to call, the set of parameters the requester is authorized to pass in for that API set, and the set of privacy categories of output data corresponding to that parameter set; step S240, evaluating the data leakage risk of this API call based at least on the parsed data and the authority data.
The steps are as follows:
First, in step S210, a request message sent by the requester to the service platform for calling an API, and a response message returned by the service platform for the request message, are acquired.
In one embodiment, the requester may be an individual, an organization, an enterprise, or the like; it may log into the service platform through an account registered on the service platform and initiate API call requests while using the service.
In one embodiment, this step may include: acquiring the request message and the response message from the service platform. In another embodiment, this step may include: acquiring the request message and the response message from the gateway. On the other hand, in one embodiment, the API call network traffic data (including the request message and the response message) generated in real time at the service platform/gateway may be scanned at predetermined time intervals (e.g., 1 s or 1 min). In another embodiment, the service platform/gateway may, in response to receiving the response message, send the request message and the response message to a risk assessment device (e.g., the risk assessment device shown in Fig. 1).
In one embodiment, this step may include: acquiring, from the service platform/gateway, a request message for requesting the private data of the target object and the corresponding response message. In a specific embodiment, after receiving the API call request from the requester, the service platform/gateway may extract the name of the target API (there may be one or more target APIs; the name is generally also the interface address of the API) from the request header, locate the target API by the name, carry out the call to the target API, and generate the response message. In addition, if the request header is found to further include a target object ID of the target object (usually identification information the service platform has assigned to the target object), it may be determined that the API call request is for requesting the private data of the target object. On this basis, in this step, that API call request may be obtained as the request message, together with the response message corresponding to it.
In another specific embodiment, the service platform/gateway may further pre-store a mapping relationship between the API and the data category, for example, the API a provides a privacy class, and the API b provides a non-privacy class. In this way, the service platform/gateway may determine a data type corresponding to the target API according to the name of the target API in the request header, and tag the API call request for calling the target API by using the data type, and further, in this step, an API call request and a corresponding API call response, of which the tagged data type includes a privacy class, may be obtained from the service platform/gateway and serve as the request message and the response message, respectively.
In the above, a request message for requesting the private data of the target object and a corresponding response message may be acquired.
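For illustration, the tagging mechanism described above can be sketched in Python as follows; the mapping `API_DATA_CATEGORY`, the request-header layout, and the function name are assumptions made for the sketch, not structures defined by this embodiment.

```python
# Minimal sketch of tagging API call requests by the data category of the
# target API. The mapping and the request-header layout are illustrative
# assumptions, not part of the embodiment.
API_DATA_CATEGORY = {
    "http://user.cn/data/?id=00": "privacy",        # API providing privacy-class data
    "http://open.cn/weather/?id=01": "non-privacy",
}

def is_privacy_call(request_header: dict) -> bool:
    """True if the target API named in the request header is tagged as
    providing privacy-class data, so the message pair should be collected."""
    api_name = request_header.get("target_api", "")
    return API_DATA_CATEGORY.get(api_name) == "privacy"
```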
Next, in step S220, the request message and the response message are analyzed to obtain analysis data.
In one embodiment, the request message includes a request header and a request body. In a specific embodiment, the names of the target APIs, the requester ID of the requester (usually, identification information assigned to the requester by the service platform), and the target object ID of the target object may be parsed from the header of the request, and included in the parsing data. In a particular embodiment, the input parameters for the target APIs may be parsed from the request body. It will be appreciated that the input parameters are used to import several target APIs to cause the several target APIs to output data content corresponding to the input parameters. In one example, the input parameters may include: phone, IDnumber, gender, etc.
In one embodiment, the response message includes a response body. In a specific embodiment, the private data output by the target APIs may be parsed from the response body. Further, the several privacy categories corresponding to the parsed private data may be determined. In a more specific embodiment, any first field included in the parsed private data corresponds to a first field name, a first field size, and a first field type. For example, assume the first field is (phone, int, 18800008888): its field name is phone, its field content is 18800008888, its field size is 11, and its field type is int.
In one example, the first category corresponding to the first field name may be determined based on a preset mapping relation between field names and privacy categories, and the first category classified into the several privacy categories. In a specific example, assume the field names map to privacy categories as shown in Table 1:
TABLE 1
Field name | Privacy category
phone      | Mobile phone number
gender     | Gender
ID_number  | Identity card number
addr       | Address
Assuming the first field name is addr, the first category determined based on Table 1 is Address, which is classified into the several privacy categories corresponding to the private data.
In another example, the second category corresponding to the first field size may be determined based on a preset mapping relation between field sizes and privacy categories, and the second category classified into the privacy categories. In a specific example, assume the field sizes map to privacy categories as shown in Table 2:
TABLE 2
Field size | Privacy category
11         | Mobile phone number
18         | Identity card number
Assuming the first field size is 11, the second category determined based on Table 2 is Mobile phone number, which is classified into the several privacy categories corresponding to the private data.
In yet another example, a third category corresponding to a combination including the first field size and the first field type may be determined based on a preset mapping relationship between the combination including the field size and the field type and the privacy categories, and the third category is classified into the privacy categories.
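Taken together, the three mapping strategies above amount to dictionary lookups keyed on the field name, the field size, or the (field size, field type) combination. A minimal sketch follows, assuming the mappings of Tables 1 and 2; in practice any one strategy may be used on its own, as the embodiment describes.

```python
# Sketches of the three preset mappings; contents follow Tables 1 and 2,
# and the size/type combination entry is an illustrative assumption.
NAME_TO_CATEGORY = {"phone": "mobile phone number", "gender": "gender",
                    "ID_number": "identity card number", "addr": "address"}
SIZE_TO_CATEGORY = {11: "mobile phone number", 18: "identity card number"}
SIZE_TYPE_TO_CATEGORY = {(11, "int"): "mobile phone number"}

def categories_for_field(name: str, size: int, ftype: str) -> set:
    """Collect the privacy categories suggested by each mapping strategy."""
    cats = set()
    if name in NAME_TO_CATEGORY:
        cats.add(NAME_TO_CATEGORY[name])
    if size in SIZE_TO_CATEGORY:
        cats.add(SIZE_TO_CATEGORY[size])
    if (size, ftype) in SIZE_TYPE_TO_CATEGORY:
        cats.add(SIZE_TYPE_TO_CATEGORY[(size, ftype)])
    return cats
```

For the example field (phone, int, 18800008888), categories_for_field("phone", 11, "int") returns {"mobile phone number"}.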
In another more specific embodiment, the parsed private data includes a plurality of fields. In one example, determining the several privacy categories corresponding to the private data may include: determining, based on check algorithms preset for several privacy categories, that several of the fields correspond to several fourth categories among those privacy categories, and classifying the fourth categories into the privacy categories. In a specific example, a check algorithm may be implemented as a user-defined function (UDF): the field content is fed to the function to judge whether the field belongs to the privacy category corresponding to that function. In a more specific example, a UDF for verifying the validity of an identity card number may be built from the encoding rules of valid identity card numbers and the check code they contain; applying this function to each field's content then determines whether that field is an identity card number.
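As a concrete instance of such a UDF, the 18-digit Chinese resident identity card number carries an ISO 7064 MOD 11-2 check digit, so validity can be verified as sketched below; the function name is illustrative.

```python
# Sketch of a user-defined check function (UDF) for the 18-digit resident
# identity card number, using the standard ISO 7064 MOD 11-2 check digit.
WEIGHTS = [7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
CHECK_CODES = "10X98765432"  # check character for remainder 0..10

def looks_like_id_number(field_content: str) -> bool:
    """True if the field content passes the identity-card checksum."""
    if len(field_content) != 18 or not field_content[:17].isdigit():
        return False
    s = sum(int(d) * w for d, w in zip(field_content[:17], WEIGHTS))
    return CHECK_CODES[s % 11] == field_content[17].upper()
```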
In another example, determining the several privacy categories corresponding to the private data may include: determining, based on several preset regular expressions, that several of the fields match several of those expressions, determining, based on the mapping relation between regular expressions and privacy categories, the several fifth categories corresponding to the matched expressions, and classifying the fifth categories into the privacy categories. In a specific example, the regular expressions include one built from the characteristics of mobile phone numbers, used to screen out field contents of 11 digits whose first digit is 1 and whose first three digits belong to an existing network prefix (e.g., the China Mobile prefixes 138, 139, etc.); a field screened out by this expression clearly corresponds to the privacy category Mobile phone number, which is then classified into the privacy categories (a sketch follows below). In the above, the request message and the response message are parsed to obtain the parsed data. Specifically, the parsed data may include one or more of: the names of the several target APIs, the requester ID of the requester, the target object ID of the target object, the input parameters of the target APIs, the private data output through the target APIs, and the several privacy categories corresponding to that private data.
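The regular-expression screening described above can be sketched as follows; the prefix list in the pattern is illustrative, not an exhaustive list of network prefixes.

```python
import re

# Sketch of regular-expression screening for mobile phone numbers: 11 digits,
# first digit 1, first three digits a known network prefix (the prefix list
# here is an illustrative assumption).
PHONE_RE = re.compile(r"^1(?:38|39|5\d|8\d)\d{8}$")
REGEX_TO_CATEGORY = {PHONE_RE: "mobile phone number"}

def categories_by_regex(field_content: str) -> list:
    """Return the privacy categories whose regular expressions match."""
    return [cat for rx, cat in REGEX_TO_CATEGORY.items() if rx.match(field_content)]
```

For instance, categories_by_regex("13800001234") returns ["mobile phone number"].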
After the step S210 is executed, step S230 may be executed to obtain the authority data of the requestor calling the API from the service platform.
Specifically, the authority data includes the API set the requester is authorized to call, the parameter set composed of the parameters it is authorized to pass in for that API set, and the privacy category set of the corresponding output data. In one example, the API set may include the names of one or more APIs, such as http://yiteng.cn/data/?id=91 and https://niubi.cn/data/?id=8. In one example, the parameters in the parameter set may include gender, phone, and addr. In one example, the privacy categories in the privacy category set may include gender, telephone, and address.
In one embodiment, the service platform includes a user authorization system, a subscription system, an API management system, and the like. It should be understood that the user authorization system stores the part of the private data that individual or enterprise users have authorized the service platform to provide externally; the subscription system stores the data range each requester has contracted with the service platform to obtain; and the API management system holds information such as the API interface documents the service platform provides for requesters to call. On this basis, the relevant data can be obtained from these systems respectively, sorted, and included in the authority data.
In this way, the authority data of the requesting party calling the API can be obtained from the service platform.
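The authority data assembled from these three systems can be sketched as a simple structure; the `platform` client and its method names are hypothetical stand-ins, not interfaces defined by this embodiment.

```python
from dataclasses import dataclass

# Sketch of the assembled authority data. Field names are assumptions.
@dataclass
class AuthorityData:
    api_set: set            # APIs the requester is authorized to call
    param_set: set          # parameters it is authorized to pass in
    privacy_class_set: set  # privacy categories of the corresponding output

def load_authority_data(requester_id: str, platform) -> AuthorityData:
    """Gather and sort the relevant records from the user authorization,
    subscription, and API management systems (hypothetical client calls)."""
    return AuthorityData(
        api_set=set(platform.api_management.apis_for(requester_id)),
        param_set=set(platform.subscription.params_for(requester_id)),
        privacy_class_set=set(platform.subscription.classes_for(requester_id)),
    )
```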
In the above, the parsed data is obtained in step S220 and the authority data in step S230. Next, in step S240, the data leakage risk of this API call is evaluated based at least on the parsed data and the authority data.
In one embodiment, the data leakage risk of this API call may be evaluated based on the parsed data and the authority data. In a specific embodiment, the parsed data and the authority data may be input together into a pre-trained first risk assessment model to obtain a first prediction result indicating the data leakage risk. In a more specific embodiment, the first risk assessment model may employ a machine learning algorithm such as a decision tree, random forest, AdaBoost, or neural network. In a more specific embodiment, the first prediction result may be a risk classification level, such as high, medium, or low. In another more specific embodiment, the first prediction result may be a risk assessment score, such as 20 or 85. It should be noted that the use of the first risk assessment model is similar to its training, so the training process is not described in detail.
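As a sketch of the first risk assessment model, assuming the parsed data and authority data have already been vectorized into a single feature row (how they are vectorized is not specified by the embodiment), a random forest could be used as follows.

```python
from sklearn.ensemble import RandomForestClassifier

# Sketch only: the embodiment states the model may be a decision tree, random
# forest, AdaBoost, neural network, etc.; the feature construction from the
# parsed data and authority data is an assumption.
def train_first_model(feature_rows, risk_labels):
    model = RandomForestClassifier(n_estimators=100)
    model.fit(feature_rows, risk_labels)  # labels such as "high"/"medium"/"low"
    return model

def predict_risk(model, parsed_vec, authority_vec):
    """Predict a risk level for one API call from the combined features."""
    return model.predict([list(parsed_vec) + list(authority_vec)])[0]
```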
In another specific embodiment, the analysis data and the authority data may be compared to obtain a comparison result, and the data leakage risk may be evaluated based on the comparison result.
In a more specific embodiment, the comparing the parsed data with the authority data to obtain a comparison result may include: and judging whether the target APIs belong to the API set or not to obtain API comparison pair results aiming at the API comparison items. In one example, assume that the target APIs include http:// user. cn/data/? id 00, and http:// user. cn/data/? id 00 and http:// company. cn/data/? And d, determining that all the target APIs belong to the API set and the number of the target APIs which do not belong to the API set is 0 through alignment, and thus determining that the API comparison pair result is 0.
Obtaining the alignment result may further comprise: and judging whether the input parameters of the target APIs belong to the parameter set or not to obtain a parameter comparison pair sub-result aiming at the parameter comparison item. In one example, assuming that the input parameters of the target APIs include phone and IDnumber, and the phone is included in the parameter set, the IDnumber may be determined not to belong to the parameter set by the comparison, and thus the parameter comparison pair result may be determined to be 1.
Obtaining the alignment result may further comprise: and judging whether the privacy classes belong to the privacy class set or not to obtain a class comparison pair sub-result aiming at the class comparison item. In an example, assuming that the plurality of privacy categories include a mobile phone number and an identity card number, and the set of privacy categories includes the mobile phone number, the identity card number may be determined not to belong to the set of privacy categories by comparison, and thus the result of comparing the privacy categories may be determined to be 1. According to a more specific embodiment, in the case that it is determined that there is an out-of-set privacy category that does not belong to the privacy category set, a mapping relationship between a preset privacy category and a privacy sensitivity may be obtained; and determining the privacy sensitivity corresponding to the privacy classes outside the set based on the mapping relation, and classifying the privacy sensitivity into the class comparison pair result.
In one example, in the case that it is determined that there is an out-of-set privacy category (e.g., identification number), a mapping relationship between a preset privacy category and privacy sensitivity, such as the mapping relationship shown in table 3, may be obtained.
TABLE 3
Privacy category     | Privacy sensitivity
Mobile phone number  | 5
Gender               | 1
Identity card number | 5
Address              | 3
Based on the mapping shown in Table 3, the privacy sensitivity corresponding to the identity card number is determined to be 5, and 5 is taken as the category comparison sub-result.
In this way, the API comparison sub-result, the parameter comparison sub-result, and the category comparison sub-result are obtained as the comparison result.
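The three comparison sub-results can be sketched as set differences. The convention of returning counts (and, for categories, a Table 3 sensitivity) follows the examples above; taking the maximum sensitivity when several out-of-set categories exist is an assumption, since the embodiment only says the sensitivities are included.

```python
# Sketch of computing the three comparison sub-results. SENSITIVITY is Table 3.
SENSITIVITY = {"mobile phone number": 5, "gender": 1,
               "identity card number": 5, "address": 3}

def compare(target_apis, input_params, privacy_classes,
            api_set, param_set, class_set):
    api_sub = len(set(target_apis) - set(api_set))       # out-of-set APIs
    param_sub = len(set(input_params) - set(param_set))  # out-of-set parameters
    out_of_set = set(privacy_classes) - set(class_set)
    # Assumption: with several out-of-set categories, take the highest sensitivity.
    cat_sub = max((SENSITIVITY.get(c, 0) for c in out_of_set), default=0)
    return api_sub, param_sub, cat_sub
```

Applied to the examples above, compare returns (0, 1, 5), matching the sub-results used in the scoring example below.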
Further, evaluating the data leakage risk based on the comparison result may include: weighting the API comparison sub-result, the parameter comparison sub-result, and the category comparison sub-result with weights pre-assigned to the API comparison item, the parameter comparison item, and the category comparison item, to obtain an assessment score for the data leakage risk. In one example, assuming the pre-assigned weights are 0.2, 0.3, and 0.5 respectively, and the three sub-results are 0, 1, and 5 respectively, the corresponding data leakage risk score is 0.2×0 + 0.3×1 + 0.5×5 = 2.8.
On the other hand, in one embodiment, the data leakage risk of this API call may be evaluated based on the parsed data and the authority data together with the matching degree between the requester's attribute information and the target object's attribute information. It should be understood that even if the requester calls, through the API, data it is entitled to use, there is still a risk of data leakage (such as data resale) if the acquired data cannot actually be used by its business. For example, if the requester is an offline store in Hangzhou, calling the address information of a user who usually resides in Beijing is unusual. For this reason, the matching degree between the requester's and the target object's attribute information can be introduced into the evaluation, further improving the reliability and usability of the result. Generally, the lower the matching degree, the higher the risk of data leakage.
In a specific embodiment, after the requester ID and the target object ID are obtained, the corresponding requester attribute information may be acquired from the service platform based on the requester ID, and the corresponding object attribute information based on the target object ID; the matching degree between the two is then determined. In a more specific embodiment, the requester attribute information includes one or more of: the requester's scale, industry, place of registration, physical location, application type, and application vulnerability scanning status. In one example, the requester may be an enterprise whose attribute information includes: more than 500 employees, the hairdressing industry, a physical store in Hangzhou, a consumer application, and no vulnerabilities found in the application. In a more specific embodiment, the target object may include a target user, the target object ID includes the user ID of that user, and the object attribute data includes user personal information comprising one or more of: age, gender, occupation, hobbies, usual place of residence, service usage preferences, and service usage records. In one example, a user's personal information may include: age 35, male, programmer, fond of food, Beijing, purchased 3 hairpieces within the last month.
In a more specific embodiment, determining the matching degree between the requester attribute information and the object attribute information may include: inputting both into a pre-trained matching degree prediction model to obtain the matching degree between the requester and the object. In one example, the matching degree prediction model may employ a logistic regression algorithm. In another more specific embodiment, the matching degree may instead be calculated with a matching degree algorithm; in one example, the Manhattan distance or Euclidean distance between the feature vector of the requester attribute information and that of the object attribute information may be calculated as the matching degree.
The matching degree can thus be obtained. Further, in a specific embodiment, the assessment score for the data leakage risk may be obtained by weighting the matching degree together with the API, parameter, and category comparison sub-results, using weights pre-assigned to the matching degree and the three comparison items. In one example, the reciprocal of the matching degree may be taken and the weighting performed on that reciprocal. In a specific example, assuming the weights pre-assigned to the matching degree, the API comparison item, the parameter comparison item, and the category comparison item are 0.3, 0.1, 0.2, and 0.4 respectively, and the matching degree and the three comparison sub-results are 0.4, 0, 1, and 5 respectively, the corresponding data leakage risk score is 0.3×(1/0.4) + 0.1×0 + 0.2×1 + 0.4×5 = 2.95.
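A sketch of the matching degree and the weighted scoring just described; mapping the Euclidean distance to a similarity in (0, 1] is an assumption, and the default weights reproduce the example above.

```python
import math

def matching_degree(requester_vec, object_vec):
    """Euclidean distance between the two attribute feature vectors, mapped so
    that larger values mean a better match (the mapping is an assumption)."""
    return 1.0 / (1.0 + math.dist(requester_vec, object_vec))

def risk_score(match, api_sub, param_sub, cat_sub,
               weights=(0.3, 0.1, 0.2, 0.4)):
    """Weighted score using the reciprocal of the matching degree."""
    w_m, w_api, w_param, w_cat = weights
    return (w_m * (1.0 / match) + w_api * api_sub
            + w_param * param_sub + w_cat * cat_sub)

# Reproducing the example: risk_score(0.4, 0, 1, 5)
# = 0.3*(1/0.4) + 0.1*0 + 0.2*1 + 0.4*5 = 2.95
```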
In another specific embodiment, the comparison result and the matching degree may be input into a second risk assessment model trained in advance to obtain a second prediction result indicating the data leakage risk.
In a more specific embodiment, the second risk assessment model may employ a decision tree, random forest, AdaBoost, neural network, or other machine learning algorithm. In a more specific embodiment, the second prediction result may be a risk classification level, such as extremely high, medium, low, or extremely low. In another more specific embodiment, the second prediction result may be a risk assessment score, such as 15 or 90. As before, the use of the second risk assessment model is similar to its training, so the training process is not described in detail. In this way, the data leakage risk of this call can be evaluated based on the comparison result and the matching degree.
In the above, the evaluation of the data leakage risk of this API call is realized based on the parsed data and the authority data, optionally together with the attribute information of the requester and of the target object. It should be understood that relevant data from the requester's historical API calls may also be used in evaluating the risk of this call.
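Tying steps S210 through S240 together, the overall flow can be sketched as below; `gateway`, `platform`, and `parse_messages` are hypothetical stand-ins, and the other helpers are the sketches above.

```python
def assess_api_call(gateway, platform):
    # S210: acquire a recorded request/response pair (hypothetical gateway API)
    request_msg, response_msg = gateway.fetch_privacy_call()
    # S220: parse the messages (parse_messages is a hypothetical stand-in)
    parsed = parse_messages(request_msg, response_msg)
    # S230: acquire the requester's authority data (sketched earlier)
    auth = load_authority_data(parsed["requester_id"], platform)
    # S240: compare, compute the matching degree, and score (sketched earlier)
    api_sub, param_sub, cat_sub = compare(
        parsed["target_apis"], parsed["input_params"], parsed["privacy_classes"],
        auth.api_set, auth.param_set, auth.privacy_class_set)
    match = matching_degree(platform.requester_profile(parsed["requester_id"]),
                            platform.object_profile(parsed["target_object_id"]))
    return risk_score(match, api_sub, param_sub, cat_sub)
```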
In summary, by using the risk assessment method for private data leakage disclosed in the embodiments of the present specification, the risk of private data leakage caused by API call can be assessed timely and accurately, so as to effectively prevent private data leakage.
According to another aspect of embodiments, the present specification also discloses an evaluation device. In particular, fig. 3 shows a block diagram of a risk assessment device for private data leakage according to one embodiment. As shown in fig. 3, the apparatus 300 may include:
the message obtaining unit 310 is configured to obtain a request message for calling an application program interface API sent by a requester to a service platform, and a response message returned by the service platform for the request message, where the request message is used to request private data of a target object. The analysis unit 320 is configured to analyze the request message and the response message to obtain analysis data, where the analysis data at least includes a plurality of target APIs, input parameters for the target APIs, and a plurality of privacy categories of the privacy data output through the target APIs; the permission obtaining unit 330 is configured to obtain permission data for calling the API by the requestor from the service platform, where the permission data includes an API set that the requestor has permission to call, a parameter set composed of parameters that the API set has permission to enter, and a privacy class set of output data corresponding to the parameter set. The evaluation unit 340 is configured to evaluate the risk of data leakage of the API call based on at least the parsing data and the permission data.
In an embodiment, the parsing unit 320 specifically includes: a first parsing subunit 321 configured to parse the input parameters included in the request message and include them in the parsed data; a second parsing subunit 322 configured to parse the private data included in the response message; and a determining subunit 323 configured to determine the several privacy categories corresponding to the private data and include them in the parsed data.
In a specific embodiment, the private data includes an arbitrary first field, and the first field corresponds to a first field name, a first field size, and a first field type; wherein the determining subunit 323 is specifically configured to: determining a first category corresponding to the first field name based on a preset mapping relation between the field name and the privacy category, and classifying the first category into the privacy categories; or, determining a second category corresponding to the first field size based on a preset mapping relation between the field size and the privacy categories, and classifying the second category into the privacy categories; or, determining a third category corresponding to a combination comprising the first field size and the first field type based on a preset mapping relation between the combination comprising the field size and the field type and the privacy categories, and classifying the third category into the privacy categories.
In a specific embodiment, the private data includes a plurality of fields, and the determining subunit is specifically configured to: determine, based on check algorithms preset for several privacy categories, that several of the fields correspond to several fourth categories among the privacy categories, and classify the fourth categories into the privacy categories; and/or determine, based on several preset regular expressions, that several of the fields match several of the regular expressions, determine, based on the mapping relation between regular expressions and privacy categories, the several fifth categories corresponding to the matched expressions, and classify the fifth categories into the privacy categories.
In a specific embodiment, the evaluation unit 340 is specifically configured to: and inputting the analysis data and the authority data into a pre-trained first risk assessment model together to obtain a first prediction result, and indicating the data leakage risk.
In one embodiment, the evaluation unit 340 specifically includes: a comparison subunit 341 configured to compare the analysis data with the permission data to obtain a comparison result; an evaluation subunit 342 configured to evaluate the data leakage risk based on at least the comparison result.
In a specific embodiment, the comparison subunit 341 is specifically configured to: judge whether the several target APIs belong to the API set, obtaining an API comparison sub-result for the API comparison item and including it in the comparison result; judge whether the input parameters of the target APIs belong to the parameter set, obtaining a parameter comparison sub-result for the parameter comparison item and including it in the comparison result; and judge whether the several privacy categories belong to the privacy category set, obtaining a category comparison sub-result for the category comparison item and including it in the comparison result.
In a more specific embodiment, the comparison subunit 341 is further configured to: when it is judged that the several privacy categories contain out-of-set privacy categories not belonging to the privacy category set, acquire a preset mapping relation between privacy categories and privacy sensitivities; and determine, based on that mapping, the privacy sensitivities corresponding to the out-of-set categories and include them in the category comparison sub-result.
In another specific embodiment, the parsing data further includes a requester ID of the requester and a target object ID of the target object; the apparatus 300 further comprises: an attribute obtaining unit 350 configured to obtain corresponding requester attribute information from the service platform based on the requester ID and obtain corresponding object attribute information from the service platform based on the target object ID; a matching degree determination unit 360 configured to determine a matching degree between the requester attribute information and the object attribute information; the evaluation unit 340 is specifically configured to: and evaluating the data leakage risk based on the comparison result and the matching degree.
In a more specific embodiment, the matching degree determining unit 360 is specifically configured to: inputting the attribute information of the requester and the attribute information of the object into a pre-trained matching degree prediction model to obtain the matching degree between the requester and the object; or calculating the matching degree between the attribute information of the requester and the attribute information of the object based on a matching degree algorithm.
In another more specific embodiment, the requester attribute information includes one or more of: the scale of the requester, the industry of the requester, the region where the requester is registered, the region where the requester entity is located, the application type of the requester and the vulnerability scanning condition of the requester; and/or the target object comprises a target user, the target object ID comprises a user ID of the target user, the object attribute data comprises user personal information, and the user personal information comprises one or more of the following: age, gender, occupation, hobbies, frequent location, service usage preferences, service usage records.
In yet another more specific embodiment, the evaluation unit 340 is specifically configured to: weight the API comparison sub-result, the parameter comparison sub-result, the category comparison sub-result, and the matching degree based on weights pre-assigned to the API comparison item, the parameter comparison item, the category comparison item, and the matching degree, to obtain an assessment score for the data leakage risk.
In yet another more specific embodiment, the evaluation unit 340 is specifically configured to: input the comparison result and the matching degree together into a pre-trained second risk assessment model to obtain a second prediction result indicating the data leakage risk.
In summary, with the risk assessment device for private data leakage disclosed in the embodiments of the present specification, the risk of private data leakage occurring due to API call can be assessed timely and accurately, so as to effectively prevent private data leakage.
According to an embodiment of another aspect, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method described in connection with fig. 2.
According to an embodiment of yet another aspect, there is also provided a computing device comprising a memory and a processor, the memory having stored therein executable code, the processor, when executing the executable code, implementing the method described in connection with fig. 2.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
The above embodiments further describe the objects, technical solutions, and advantages of the present invention in detail. It should be understood that the above are only exemplary embodiments of the present invention and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of the present invention shall fall within the scope of the present invention.

Claims (22)

1. A risk assessment method for private data leakage, comprising:
acquiring a request message sent by a requester to a service platform for calling an application programming interface (API), and a response message returned by the service platform for the request message, wherein the request message is used to request private data of a target object;
parsing the request message and the response message to obtain parsed data, wherein the parsed data at least comprises a plurality of target APIs, input parameters for the target APIs, a plurality of privacy categories of the private data output through the target APIs, a requester ID of the requester, and a target object ID of the target object;
acquiring, from the service platform, authority data for the requester to call APIs, wherein the authority data comprises an API set the requester is authorized to call, a parameter set composed of parameters the requester is authorized to pass in for the API set, and a privacy category set of output data corresponding to the parameter set;
comparing the parsed data with the authority data to obtain a comparison result;
acquiring corresponding attribute information of the requester from the service platform based on the ID of the requester; acquiring corresponding object attribute information from the service platform based on the target object ID; determining the matching degree between the attribute information of the requester and the attribute information of the object;
and evaluating the data leakage risk of the API call based on the comparison result and the matching degree.
2. The method of claim 1, wherein parsing the request message and the response message to obtain parsed data comprises:
analyzing the input parameters included in the request message and putting the input parameters into the analysis data;
and analyzing the privacy data included in the response message, determining the privacy classes corresponding to the privacy data, and classifying the privacy classes into the analyzed data.
3. The method of claim 2, wherein the privacy data includes any first field, the first field corresponding to a first field name, a first field size, and a first field type;
wherein determining the plurality of privacy categories corresponding to the privacy data comprises:
determining a first category corresponding to the first field name based on a preset mapping relation between field names and privacy categories, and putting the first category into the plurality of privacy categories; or, alternatively,
determining a second category corresponding to the first field size based on a preset mapping relation between field sizes and privacy categories, and putting the second category into the plurality of privacy categories; or, alternatively,
determining a third category corresponding to the combination of the first field size and the first field type based on a preset mapping relation between combinations of field size and field type and privacy categories, and putting the third category into the plurality of privacy categories.
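As an illustration of the three alternative mappings in claim 3, here is a sketch under assumed mapping tables; the tables and the category names are invented for the example.

```python
# Assumed preset mapping relations; not taken from the patent.
NAME_TO_CATEGORY = {"phone": "contact", "id_card": "identity"}
SIZE_TO_CATEGORY = {11: "contact", 18: "identity"}
SIZE_AND_TYPE_TO_CATEGORY = {(11, "str"): "contact", (18, "str"): "identity"}

def categorize_field(name: str, value) -> str | None:
    size, ftype = len(str(value)), type(value).__name__
    # The claim lists three alternatives; this sketch simply tries each
    # mapping in turn: field name, then field size, then (size, type).
    return (NAME_TO_CATEGORY.get(name)
            or SIZE_TO_CATEGORY.get(size)
            or SIZE_AND_TYPE_TO_CATEGORY.get((size, ftype)))

print(categorize_field("mobile", "13800000000"))  # 'contact' via the size mapping
```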
4. The method of claim 2 or 3, wherein the privacy data includes a plurality of fields, and wherein determining the plurality of privacy categories corresponding to the privacy data comprises:
determining, based on check algorithms preset for certain privacy categories, that some of the plurality of fields correspond to fourth categories among those privacy categories, and putting the fourth categories into the plurality of privacy categories; and/or,
determining, based on a plurality of preset regular expressions, that some of the plurality of fields match certain of the regular expressions, determining fifth categories corresponding to the matched regular expressions based on a mapping relation between regular expressions and privacy categories, and putting the fifth categories into the plurality of privacy categories.
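A sketch of the two recognition routes in claim 4: a checksum-style check algorithm (here, the mod-11 checksum used by 18-digit mainland-China resident ID numbers) and regular-expression matching. The regex table and category names are assumptions made up for the example.

```python
import re

REGEX_TO_CATEGORY = [
    (re.compile(r"^1[3-9]\d{9}$"), "phone_number"),
    (re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"), "email"),
]

def id_checksum_ok(value: str) -> bool:
    # Mod-11 checksum of 18-digit resident ID numbers (GB 11643).
    if not re.fullmatch(r"\d{17}[\dXx]", value):
        return False
    weights = [7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
    total = sum(int(c) * w for c, w in zip(value[:17], weights))
    return "10X98765432"[total % 11] == value[-1].upper()

def categorize(value: str) -> str | None:
    if id_checksum_ok(value):            # claim-4 "check algorithm" route
        return "identity_number"
    for pattern, category in REGEX_TO_CATEGORY:
        if pattern.match(value):         # claim-4 regular-expression route
            return category
    return None

print(categorize("13912345678"))         # 'phone_number'
```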
5. The method of claim 1, wherein comparing the parsed data with the permission data to obtain a comparison result comprises:
judging whether the plurality of target APIs belong to the API set, obtaining an API comparison sub-result for an API comparison item, and putting it into the comparison result;
judging whether the input parameters for the target APIs belong to the parameter set, obtaining a parameter comparison sub-result for a parameter comparison item, and putting it into the comparison result; and
judging whether the plurality of privacy categories belong to the privacy category set, obtaining a category comparison sub-result for a category comparison item, and putting it into the comparison result.
6. The method of claim 5, wherein judging whether the plurality of privacy categories belong to the privacy category set to obtain a category comparison sub-result for the category comparison item comprises:
in a case where it is judged that the plurality of privacy categories include out-of-set privacy categories not belonging to the privacy category set, acquiring a preset mapping relation between privacy categories and privacy sensitivities; and
determining, based on the mapping relation, the privacy sensitivities corresponding to the out-of-set privacy categories, and putting the privacy sensitivities into the category comparison sub-result.
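Claims 5 and 6 reduce to set-membership tests plus a sensitivity lookup for the out-of-set categories. A minimal sketch follows; the permission data and sensitivity table are invented for the example.

```python
# Invented permission data and sensitivity table for the example.
PERMS = {"apis": {"user.profile.get"},
         "params": {"uid"},
         "categories": {"age", "gender"}}
SENSITIVITY = {"contact": 0.8, "identity": 1.0, "age": 0.2}

def compare(parsed: dict, perms: dict) -> dict:
    out_of_set = parsed["categories"] - perms["categories"]
    return {
        "api_ok": parsed["apis"] <= perms["apis"],             # API item
        "param_ok": set(parsed["params"]) <= perms["params"],  # parameter item
        "category_ok": not out_of_set,                         # category item
        # Claim 6: sensitivities of the out-of-set categories, defaulting high.
        "sensitivities": {c: SENSITIVITY.get(c, 1.0) for c in out_of_set},
    }

parsed = {"apis": {"user.profile.get"}, "params": ["uid"],
          "categories": {"age", "contact"}}
print(compare(parsed, PERMS))
# {'api_ok': True, 'param_ok': True, 'category_ok': False,
#  'sensitivities': {'contact': 0.8}}
```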
7. The method of claim 1, wherein determining the matching degree between the requester attribute information and the object attribute information comprises:
inputting the requester attribute information and the object attribute information into a pre-trained matching degree prediction model to obtain the matching degree between the requester and the target object; or, alternatively,
calculating the matching degree between the requester attribute information and the object attribute information based on a matching degree algorithm.
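The second branch of claim 7 (a matching degree algorithm rather than a trained model) could look like the following; the two feature rules and their weights are invented for illustration.

```python
def matching_degree(requester: dict, target: dict) -> float:
    """Toy rule-based matching degree; rules and weights are invented."""
    score = 0.0
    # A requester whose industry matches the user's service preferences
    # has a plausible business reason to request this user's data.
    if requester.get("industry") in target.get("service_preferences", []):
        score += 0.6
    # Operating in the user's frequently visited region adds plausibility.
    if requester.get("region") == target.get("frequent_region"):
        score += 0.4
    return score

print(matching_degree({"industry": "travel", "region": "Hangzhou"},
                      {"service_preferences": ["travel"],
                       "frequent_region": "Hangzhou"}))  # 1.0
```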
8. The method of claim 1, wherein the requester attribute information comprises one or more of: the scale of the requester, the industry of the requester, the region where the requester is registered, the region where the requester entity is located, the application type of the requester, and the vulnerability scanning status of the requester; and/or,
the target object comprises a target user, the target object ID comprises a user ID of the target user, the object attribute information comprises personal information of the user, and the personal information comprises one or more of the following: age, gender, occupation, hobbies, frequently visited locations, service usage preferences, and service usage records.
9. The method of claim 1, wherein evaluating the data leakage risk based on the comparison result and the matching degree comprises:
weighting the API comparison sub-result, the parameter comparison sub-result, the category comparison sub-result, and the matching degree based on weights pre-assigned to the API comparison item, the parameter comparison item, the category comparison item, and the matching degree, to obtain an evaluation score for the data leakage risk.
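A sketch of the weighted evaluation in claim 9; the weights and the convention that a failed comparison item contributes its full weight are assumptions for the example.

```python
# Invented weights for the three comparison items and the matching degree.
WEIGHTS = {"api": 0.3, "param": 0.2, "category": 0.4, "match": 0.1}

def risk_score(comparison: dict, match_degree: float) -> float:
    # Each failed comparison item contributes its full weight; the matching
    # degree contributes (1 - match) of its weight, so a poor match adds risk.
    return (WEIGHTS["api"] * (not comparison["api_ok"])
            + WEIGHTS["param"] * (not comparison["param_ok"])
            + WEIGHTS["category"] * (not comparison["category_ok"])
            + WEIGHTS["match"] * (1.0 - match_degree))

print(risk_score({"api_ok": True, "param_ok": True, "category_ok": False}, 0.5))
# ~0.45: the out-of-permission category dominates the score
```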
10. The method of claim 1, wherein evaluating the data leakage risk based on the comparison result and the matching degree comprises:
inputting the comparison result and the matching degree together into a pre-trained second risk assessment model to obtain a second prediction result indicating the data leakage risk.
11. A risk assessment apparatus for private data leakage, comprising:
an information acquisition unit configured to acquire a request message for calling an application program interface (API) sent by a requester to a service platform, and a response message returned by the service platform for the request message, wherein the request message is used for requesting privacy data of a target object;
a parsing unit configured to parse the request message and the response message to obtain parsed data, wherein the parsed data at least comprises a plurality of target APIs, input parameters for the target APIs, a plurality of privacy categories of the privacy data output through the target APIs, a requester ID of the requester, and a target object ID of the target object;
a permission acquisition unit configured to acquire, from the service platform, permission data of the requester for calling APIs, wherein the permission data comprises an API set that the requester is authorized to call, a parameter set composed of parameters that are authorized to be passed in for the API set, and a privacy category set of the output data corresponding to the parameter set;
an attribute acquisition unit configured to acquire corresponding requester attribute information from the service platform based on the requester ID, and acquire corresponding object attribute information from the service platform based on the target object ID;
a matching degree determination unit configured to determine a matching degree between the requester attribute information and the object attribute information; and
an evaluation unit configured to evaluate the data leakage risk of the current API call at least based on the parsed data and the permission data, the evaluation unit specifically comprising: a comparison subunit configured to compare the parsed data with the permission data to obtain a comparison result; and an evaluation subunit configured to evaluate the data leakage risk based on the comparison result and the matching degree.
12. The apparatus of claim 11, wherein the parsing unit specifically comprises:
a first parsing subunit configured to parse the input parameters included in the request message and put them into the parsed data;
a second parsing subunit configured to parse the privacy data included in the response message; and
a determining subunit configured to determine the plurality of privacy categories corresponding to the privacy data and put the privacy categories into the parsed data.
13. The apparatus of claim 12, wherein the privacy data includes any first field, the first field corresponding to a first field name, a first field size, and a first field type;
wherein the determining subunit is specifically configured to:
determine a first category corresponding to the first field name based on a preset mapping relation between field names and privacy categories, and put the first category into the plurality of privacy categories; or, alternatively,
determine a second category corresponding to the first field size based on a preset mapping relation between field sizes and privacy categories, and put the second category into the plurality of privacy categories; or, alternatively,
determine a third category corresponding to the combination of the first field size and the first field type based on a preset mapping relation between combinations of field size and field type and privacy categories, and put the third category into the plurality of privacy categories.
14. The apparatus of claim 12 or 13, wherein the privacy data includes a plurality of fields, and wherein the determining subunit is specifically configured to:
determine, based on check algorithms preset for certain privacy categories, that some of the plurality of fields correspond to fourth categories among those privacy categories, and put the fourth categories into the plurality of privacy categories; and/or,
determine, based on a plurality of preset regular expressions, that some of the plurality of fields match certain of the regular expressions, determine fifth categories corresponding to the matched regular expressions based on a mapping relation between regular expressions and privacy categories, and put the fifth categories into the plurality of privacy categories.
15. The apparatus of claim 11, wherein the comparison subunit is specifically configured to:
judge whether the plurality of target APIs belong to the API set, obtain an API comparison sub-result for an API comparison item, and put it into the comparison result;
judge whether the input parameters for the target APIs belong to the parameter set, obtain a parameter comparison sub-result for a parameter comparison item, and put it into the comparison result; and
judge whether the plurality of privacy categories belong to the privacy category set, obtain a category comparison sub-result for a category comparison item, and put it into the comparison result.
16. The apparatus of claim 15, wherein the comparison subunit is further configured to:
in a case where it is judged that the plurality of privacy categories include out-of-set privacy categories not belonging to the privacy category set, acquire a preset mapping relation between privacy categories and privacy sensitivities; and
determine, based on the mapping relation, the privacy sensitivities corresponding to the out-of-set privacy categories, and put the privacy sensitivities into the category comparison sub-result.
17. The apparatus of claim 11, wherein the matching degree determination unit is specifically configured to:
input the requester attribute information and the object attribute information into a pre-trained matching degree prediction model to obtain the matching degree between the requester and the target object; or, alternatively,
calculate the matching degree between the requester attribute information and the object attribute information based on a matching degree algorithm.
18. The apparatus of claim 11, wherein the requester attribute information comprises one or more of: the scale of the requester, the industry of the requester, the region where the requester is registered, the region where the requester entity is located, the application type of the requester, and the vulnerability scanning status of the requester; and/or,
the target object comprises a target user, the target object ID comprises a user ID of the target user, the object attribute information comprises personal information of the user, and the personal information comprises one or more of the following: age, gender, occupation, hobbies, frequently visited locations, service usage preferences, and service usage records.
19. The apparatus of claim 11, wherein the evaluation unit is specifically configured to:
weight the API comparison sub-result, the parameter comparison sub-result, the category comparison sub-result, and the matching degree based on weights pre-assigned to the API comparison item, the parameter comparison item, the category comparison item, and the matching degree, to obtain an evaluation score for the data leakage risk.
20. The apparatus of claim 11, wherein the evaluation unit is specifically configured to:
input the comparison result and the matching degree together into a pre-trained second risk assessment model to obtain a second prediction result indicating the data leakage risk.
21. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed in a computer, causes the computer to perform the method of any of claims 1-10.
22. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that when executed by the processor implements the method of any of claims 1-10.
CN201911226781.5A 2019-12-04 2019-12-04 Risk assessment method and device for private data leakage Active CN111027094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911226781.5A CN111027094B (en) 2019-12-04 2019-12-04 Risk assessment method and device for private data leakage

Publications (2)

Publication Number Publication Date
CN111027094A CN111027094A (en) 2020-04-17
CN111027094B true CN111027094B (en) 2021-07-02

Family

ID=70207858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911226781.5A Active CN111027094B (en) 2019-12-04 2019-12-04 Risk assessment method and device for private data leakage

Country Status (1)

Country Link
CN (1) CN111027094B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111753330B (en) * 2020-06-18 2023-08-29 百度在线网络技术(北京)有限公司 Determination method, apparatus, device and readable storage medium for data leakage main body
CN113221098A (en) * 2021-05-06 2021-08-06 支付宝(杭州)信息技术有限公司 Processing method and device for interface call request
CN113098986B (en) * 2021-06-10 2021-08-24 睿至科技集团有限公司 Data sharing and exchanging method and system based on Internet of things
CN115208835A (en) * 2022-05-31 2022-10-18 奇安信科技集团股份有限公司 API classification method, device, electronic equipment, medium and product
CN115622764A (en) * 2022-10-09 2023-01-17 深圳市君思科技有限公司 Method for discovering and classifying private data in web network flow
CN115408702B (en) * 2022-11-01 2023-02-14 浙江城云数字科技有限公司 Stacking interface operation risk grade evaluation method and application thereof
CN115987690B (en) * 2023-03-20 2023-08-08 天聚地合(苏州)科技股份有限公司 Privacy computing method based on API, API calling terminal and API providing terminal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104200166A (en) * 2014-08-05 2014-12-10 杭州安恒信息技术有限公司 Script-based website vulnerability scanning method and system
CN110287729A (en) * 2019-06-15 2019-09-27 复旦大学 A kind of privacy leakage methods of risk assessment of data-oriented use demand

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170270318A1 (en) * 2016-03-15 2017-09-21 Stuart Ritchie Privacy impact assessment system and associated methods

Also Published As

Publication number Publication date
CN111027094A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
CN111027094B (en) Risk assessment method and device for private data leakage
CN108876133B (en) Risk assessment processing method, device, server and medium based on business information
CN108156237B (en) Product information pushing method and device, storage medium and computer equipment
US9852427B2 (en) Systems and methods for sanction screening
WO2021098274A1 (en) Method and apparatus for evaluating risk of leakage of private data
US9818116B2 (en) Systems and methods for detecting relations between unknown merchants and merchants with a known connection to fraud
WO2021047326A1 (en) Information recommendation method and apparatus, computer device, and storage medium
US20140181007A1 (en) Trademark reservation system
CN111324370B (en) Method and device for carrying out risk processing on to-be-on-line small program
US20210019742A1 (en) Customer acquisition without initially receiving personally identifiable information (pii)
CN111625809A (en) Data authorization method and device, electronic equipment and storage medium
CN112685774B (en) Payment data processing method based on big data and block chain finance and cloud server
CN111859371A (en) Privacy risk assessment method and device of application program and storage medium
CN111553701A (en) Session-based risk transaction determination method and device
CN110909384A (en) Method and device for determining business party revealing user information
CN113553583A (en) Information system asset security risk assessment method and device
CN114116802A (en) Data processing method, device, equipment and storage medium of Flink computing framework
CN111489175B (en) Online identity authentication method, device, system and storage medium
CN112613893A (en) Method, system, equipment and medium for identifying malicious user registration
CN112632409A (en) Same user identification method, device, computer equipment and storage medium
CN109636578A (en) Risk checking method, device, equipment and the readable storage medium storing program for executing of credit information
CN113254837A (en) Application program evaluation method, device, system, equipment and medium
CN114238280A (en) Method and device for constructing financial sensitive information standard library and electronic equipment
CN109636574B (en) Credit information risk detection method, apparatus, device and storage medium
CN109636575B (en) Terminal risk detection method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40028006
Country of ref document: HK

GR01 Patent grant