CN111611590B - Method and device for data security related to application program - Google Patents

Method and device for data security related to application program

Info

Publication number
CN111611590B
Authority
CN
China
Prior art keywords
application
sensitive information
information
marked
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010440451.2A
Other languages
Chinese (zh)
Other versions
CN111611590A (en)
Inventor
朱浩文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202010440451.2A priority Critical patent/CN111611590B/en
Publication of CN111611590A publication Critical patent/CN111611590A/en
Application granted granted Critical
Publication of CN111611590B publication Critical patent/CN111611590B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F21/577Assessing vulnerabilities and evaluating computer system security
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/302Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6227Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database where protection concerns the structure of data, e.g. records, types, queries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Bioethics (AREA)
  • General Health & Medical Sciences (AREA)
  • Storage Device Security (AREA)

Abstract

A method of data security involving an application program, the application program comprising a code base and being associated with a database, comprising: marking sensitive information in the database; scanning the code base based on the marked sensitive information to determine and mark application information related to the marked sensitive information; conducting an investigation of sensitive information leakage paths based on the marked application information; using a security test to determine whether the application program has a response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information; and detecting the application program to find further sensitive information leakage paths based on the response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information. Other aspects of the disclosure also relate to corresponding devices.

Description

Method and device for data security related to application program
Technical Field
The present disclosure relates generally to data security, and more particularly to discovery and monitoring of user privacy data leakage paths.
Background
With the growing importance and application of big data technology in the internet industry, internet companies obtain more and more data from users. However, some internet companies that hold large amounts of user data attach insufficient importance to data security and lack effective capabilities for discovering and monitoring user data leakage paths, resulting in frequent data leakage events.
There are many modes of data leakage, such as leakage by partners, leakage caused by defects in business logic, and leakage caused by hacking of application code. The key to preventing leakage is to find the scenarios in which user information can leak.
Many companies lack the ability to discover and monitor leakage of users' private information, and the small portion that do have some such capability rely on relatively single-purpose and unsophisticated means.
Thus, there is a need in the art to improve the discovery and monitoring of user privacy data leakage paths to increase the overall level of data security.
Disclosure of Invention
An aspect of the present disclosure relates to a method of data security involving an application program, the application program comprising a code base and being associated with a database, comprising: marking sensitive information in the database; scanning the code base based on the marked sensitive information to determine and mark application information related to the marked sensitive information; conducting an investigation of sensitive information leakage paths based on the marked application information; using a security test to determine whether the application program has a response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information; and detecting the application program to find further sensitive information leakage paths based on the response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information.
According to an exemplary embodiment, the method further comprises: publishing and running the application program; and performing at least one of the following to further discover and monitor sensitive information leakage paths: monitoring a log of the database to determine whether unmarked application information relates to the marked sensitive information, and supplementing and analyzing the unmarked application information; and performing traffic detection on the application program to monitor the output of marked sensitive information by each interface and page of the application program, and warehousing the traffic associated with the output of sensitive information.
According to a further exemplary embodiment, analyzing the unmarked application information comprises determining whether the unmarked application information is improperly accessed or manipulated by hacking.
According to a further exemplary embodiment, analyzing the unmarked application information comprises determining whether the application information is unmarked because sensitive information was missed when marking the database.
According to a further exemplary embodiment, if sensitive information was missed when marking the database, a full code scan of the code base for the missed sensitive information is performed to find further missed sensitive information or sensitive information leakage paths.
According to an exemplary embodiment, the method further comprises: performing black box detection on the traffic associated with the output of sensitive information to determine whether there is a risk of causing sensitive information leakage.
According to an exemplary embodiment, marking sensitive information in the database further comprises storing the marked sensitive information in a marked sensitive information list, and marking sensitive information of incremental database tables and/or modified database tables.
According to an exemplary embodiment, marking application information related to the marked sensitive information comprises storing the application information in a marked application information list, and marking unmarked application information related to the marked sensitive information comprises adding the unmarked application information related to the marked sensitive information to the marked application information list.
According to an exemplary embodiment, the sensitive information comprises sensitive tables and sensitive fields of the database.
According to an exemplary embodiment, the application information includes pages and interfaces of the application program.
According to an exemplary embodiment, the method further comprises performing offline analysis and data leakage tracing on the warehoused traffic associated with the output of sensitive information.
According to an exemplary embodiment, monitoring the log of the database further comprises: determining an interface or page that repeatedly or in large volumes queries the database and returns marked sensitive information; and recording information of that interface or page and raising an alarm.
According to an exemplary embodiment, the traffic detection of the application program further comprises monitoring one or more of the following, or any combination thereof, in the application program: a. a single request returns multiple pieces of user privacy data; b. the daily leakage amount of user privacy data of an interface exceeds a threshold; c. the number of accesses to user privacy data from a single user or a single IP is abnormal; d. access is made through a crawler or other automated tool; and when any of the above conditions is detected, relevant information is recorded and an alarm is raised.
Other aspects of the disclosure also relate to corresponding devices.
Drawings
Fig. 1 illustrates a schematic diagram of a discovery and monitoring system architecture of a user privacy information leakage pathway in accordance with an aspect of the present disclosure.
Fig. 2 illustrates a schematic diagram of a discovery and monitoring system of user privacy information leakage paths in accordance with an aspect of the present disclosure.
Fig. 3 illustrates a schematic diagram of the operation of an IAST module according to an exemplary aspect of the present disclosure.
Fig. 4 illustrates a schematic diagram of the operation of a DAST module according to an exemplary aspect of the present disclosure.
Fig. 5A illustrates a flow chart of a method of discovery and monitoring of user privacy information leakage paths in accordance with an exemplary aspect of the present disclosure.
Fig. 5B illustrates a flow chart of a method of discovery and monitoring of user privacy information leakage paths in accordance with an exemplary aspect of the present disclosure.
Fig. 6A illustrates a block diagram of a user privacy information leakage pathway discovery and monitoring apparatus in accordance with an exemplary aspect of the present disclosure.
Fig. 6B illustrates a block diagram of a discovery and monitoring device of a user privacy information leakage pathway in accordance with an exemplary aspect of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure and their technical advantages clearer, the technical solutions of the present disclosure are described in detail below with reference to specific embodiments and drawings. Those skilled in the art will appreciate that the described embodiments are representative embodiments of the application and not all possible embodiments. Based on the embodiments in this disclosure, all other embodiments that can be made by one of ordinary skill in the art without inventive effort are intended to fall within the scope of this disclosure.
Fig. 1 illustrates a schematic diagram of a discovery and monitoring system architecture 100 of a user privacy information leakage pathway in accordance with an aspect of the present disclosure.
As shown in fig. 1, according to an example embodiment, the user privacy information leakage pathway discovery and monitoring system architecture 100 may include, for example, a development stage risk assessment module 102, a test stage discovery module 104, and a run stage discovery and monitoring module 106.
According to an exemplary embodiment, tables and fields in the database that contain user privacy data are sampled from the database in a preliminary stage by semi-automated means, and internal interfaces that expose user privacy data to calls from other applications are catalogued by means such as SAST code analysis and later-stage data accumulation. During the development stage, the risk assessment module 102 can determine whether an application contains user privacy data by comparison with the data catalogued as described above, and perform a risk assessment on the application to determine, for example, the extent of manual SDL investment and to provide data support for later risk investigation.
According to an example embodiment, the test phase discovery module 104 may determine, in a test environment, which application responses via which application interfaces/pages contain user privacy information by deploying IAST, and may deploy DAST to perform security detection on these application interfaces/pages.
Static Application Security Testing (SAST), Interactive Application Security Testing (IAST), Dynamic Application Security Testing (DAST), and the like are currently commonly used application security testing techniques. Static Application Security Testing (SAST) refers to a technique for checking the correctness of a program by analyzing or checking only the syntax, structure, procedures, interfaces, etc. of the source program, without running the program under test itself.
Interactive Application Security Testing (IAST) is a technique that collects and monitors function execution and data transmission while a Web application runs on the server side and interacts with a scanner in real time, so as to efficiently and accurately identify security defects and vulnerabilities and accurately determine the code files, line numbers, functions, and parameters where the vulnerabilities are located.
Dynamic Application Security Testing (DAST) techniques analyze the dynamic operating state of an application during the test or run phase. DAST simulates hacking to dynamically attack the application and analyzes the application's responses to determine whether the Web application is vulnerable.
According to an exemplary embodiment, from the application risk information marked by the development stage risk assessment module together with the database logs collected after the application goes online, it can be determined which database tables and fields an application has operated on and whether anything was missed in the earlier development stage risk assessment, so that further verification and data supplementation can be performed; and/or the output of user privacy data by each application/interface can be monitored through traffic detection, for offline analysis of whether abnormal output exists and for tracing after a data leakage event occurs.
According to an exemplary embodiment, the discovery and monitoring system architecture 100 for user privacy information leakage paths may further include a vulnerability management and repair module (not shown), so that information on the security vulnerabilities discovered and monitored by the development stage risk assessment module 102, the test stage discovery module 104, and the run stage discovery and monitoring module 106 can be provided to the vulnerability management and repair module to repair and manage the corresponding vulnerabilities. In other words, the scheme of the disclosure discovers and monitors user privacy data leakage paths by first locating the leakage points and storage points of private information and then tracing and tracking from them.
Fig. 2 illustrates a schematic diagram of a discovery and monitoring system 200 of user privacy information leakage paths in accordance with an aspect of the disclosure. The discovery and monitoring system 200 of user privacy information leakage pathway may be, for example, one exemplary implementation of the discovery and monitoring system architecture 100 of the user privacy information leakage pathway of fig. 1.
As shown, the user privacy information leakage path discovery and monitoring system 200 may include a marking module 201, a security review module 202, a security test module 204, an IAST module 206, a DAST module 208, a database log monitoring module 210, and a traffic monitoring module 212, among others.
According to an exemplary implementation, the marking module 201 may be coupled to one or more databases and mark sensitive information in these databases in a manner such as automated sampling. The sensitive information may include, for example, sensitive tables, sensitive fields, and the like. The marking may be periodic or event triggered. For example, in the case of periodic marking, the marking module 201 may periodically mark sensitive information of incremental database tables and/or modified database tables. As another example, in the case of event-triggered marking, the marking module 201 may mark sensitive information of incremental database tables and/or modified database tables upon the occurrence of a predefined event (e.g., the database is modified, a new table is added, etc.). The marked sensitive information may be stored in a marked sensitive information list. The marking module 201 may communicate the marked sensitive information (e.g., the list) to the security review module 202.
The sensitive information may include information related to user privacy. In one example, the sensitive information in a bank database may include, but is not limited to, user identity document numbers, user account information, and the like. In another example, the sensitive information in a student information database may include, for example, student registration numbers, associated bank accounts, parents' identity document numbers, and the like. In yet another example, the sensitive information in a shopping platform user database may include, for example, a user's associated bank card, real-name authentication information, and the like. Where some features are not obvious, making automated marking difficult or inefficient, sensitive information marking may also involve, at least in part, semi-manual or even fully manual intervention.
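The following Python sketch, which is not part of the patent, illustrates one way such sampling-based marking could be implemented; the privacy patterns, the sampling threshold, and the (table, column, kind) list structure are assumptions made purely for illustration.

```python
import re

# Hypothetical privacy patterns; real deployments would use richer detectors.
SENSITIVE_PATTERNS = {
    "id_document": re.compile(r"^\d{17}[\dXx]$"),   # e.g. 18-character ID numbers
    "bank_card":   re.compile(r"^\d{16,19}$"),
    "phone":       re.compile(r"^1\d{10}$"),
}

def mark_sensitive_columns(sampled_rows_by_table, sample_threshold=0.5):
    """Return a marked-sensitive-information list of (table, column, kind).

    sampled_rows_by_table: {table_name: [ {column: value, ...}, ... ]}
    A column is marked when more than `sample_threshold` of its sampled
    values match one of the privacy patterns.
    """
    marked = []
    for table, rows in sampled_rows_by_table.items():
        if not rows:
            continue
        for column in rows[0]:
            values = [str(r.get(column, "")) for r in rows]
            for kind, pattern in SENSITIVE_PATTERNS.items():
                hits = sum(1 for v in values if pattern.match(v))
                if hits / len(values) > sample_threshold:
                    marked.append((table, column, kind))
                    break
    return marked

# Example: mark columns of a hypothetical "user" table from sampled rows.
samples = {"user": [{"uid": "1", "phone": "13800000000", "nick": "a"},
                    {"uid": "2", "phone": "13900000000", "nick": "b"}]}
print(mark_sensitive_columns(samples))   # [('user', 'phone', 'phone')]
```

The same routine could be rerun only against incremental or modified tables for the periodic and event-triggered cases described above.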
According to an exemplary implementation, the marking module 201 may also be coupled to a code base to scan the code base in a white-box fashion based on the marked sensitive information, so as to determine which application interfaces have operated on the sensitive information (e.g., read and/or written sensitive tables or sensitive fields in the database, accessed and/or output the sensitive information, etc.) and which have called internal interfaces to retrieve and/or operate on the sensitive information. By scanning the code base, the marking module 201 marks application information related to user privacy data, such as which application interfaces have operated on sensitive information in the database and which have called internal interfaces to acquire and/or operate on sensitive information. For example, the security review module 202 may also determine interface information and the like that outputs user privacy data. This may include, but is not limited to, the following exemplary cases: 1. the application/interface directly faces the user; or 2. the application/interface is invoked by other applications. For applications that have operated on sensitive information in this way, the marking module 201 may mark them. The marked application information may be stored in a marked application information list. The marking module 201 may communicate the marked application information (e.g., the list) to the security review module 202.
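As a minimal sketch of such a white-box scan (not the patent's actual implementation), the following Python code walks a source tree and records which files reference the marked tables or fields; the file extensions and the returned dictionary are illustrative assumptions.

```python
import os
import re

def scan_code_base(root_dir, marked_sensitive_info):
    """White-box scan: record which source files reference marked tables/fields.

    marked_sensitive_info: iterable of (table, column, kind) tuples, e.g. the
    output of the marking step. Returns a marked-application-information map
    from each source file to the sensitive identifiers it touches.
    """
    identifiers = {t for t, _, _ in marked_sensitive_info} | \
                  {c for _, c, _ in marked_sensitive_info}
    if not identifiers:
        return {}
    word = re.compile(r"\b(" + "|".join(map(re.escape, sorted(identifiers))) + r")\b")
    marked_app_info = {}
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            # Assumed source/SQL/config extensions; adjust for the real code base.
            if not name.endswith((".java", ".py", ".sql", ".xml")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                hits = sorted(set(word.findall(f.read())))
            if hits:
                marked_app_info[path] = hits
    return marked_app_info
```

A real scan would map files to interfaces/pages and distinguish reads, writes, and internal-interface calls, but the diff against the marked sensitive information list is the same idea.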
According to a further alternative embodiment, the marking of sensitive information and application information by the marking module 201 may be implemented based on machine learning. The machine learning scheme may include modes such as supervised learning, semi-supervised learning, or unsupervised learning.
According to an exemplary embodiment, the security review module 202 may, based on the obtained marked sensitive information and marked application information, make a preliminary judgment on the overall project and on places where risks may exist according to the project's requirements documents and system analysis documents, so as to quickly and efficiently pre-screen user privacy information leakage paths. For example, the security review module 202 may make security suggestions regarding the architecture and implementation of the application, and the like. For example, the security review module may determine, from the previously catalogued marked tables, fields, interface information, etc., whether the application really needs to call those tables, fields, interfaces, etc., in order to compute a risk score for the application. If the risk is not high, and especially where the security team's human resources are insufficient, automatic detection can be relied on primarily, reducing the manual effort invested in that application program; on the other hand, if the risk is high, not only automatic security detection but also manual involvement, such as manual testing, is required.
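A toy illustration of such risk scoring is sketched below; the weights, the counted quantities, and the manual-review threshold are hypothetical assumptions and only show how marked information could feed a score that decides between mostly automated detection and additional manual testing.

```python
def risk_score(app_marked_info, weights=None):
    """Toy risk score for one application based on its marked information.

    app_marked_info: dict with counts of sensitive tables, fields and internal
    interfaces the application reads or calls. Weights and the high-risk
    threshold below are illustrative assumptions only.
    """
    weights = weights or {"tables": 3, "fields": 1, "interfaces": 2}
    score = sum(weights[k] * app_marked_info.get(k, 0) for k in weights)
    needs_manual_review = score >= 10   # hypothetical threshold
    return score, needs_manual_review

score, manual = risk_score({"tables": 2, "fields": 5, "interfaces": 1})
print(score, manual)   # 13 True -> route to manual SDL testing as well
```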
The security test module 204 may focus on and perform security checks on the marked applications related to sensitive information, the applications/interfaces that invoke sensitive information, and the like. For example, the test may be a manual test, a semi-manual test, or an automatic test (e.g., SAST, etc.) so as to discover user privacy information leakage paths at an early stage and repair or modify them. For example, the security checks made by the security test module 204 may be performed after security review by the security review module 202.
Through such testing, user privacy information leakage paths can be found at an early stage and repaired or modified to improve the security of the application program.
Returning to FIG. 2, the IAST module 206 may be used in the application test phase. For example, when an application is published to a test environment for testing, the IAST module 206 may determine, when the application returns information to the user, whether the response contains user privacy information. If so, the corresponding request information is synchronized to the DAST module 208 for vulnerability detection on that interface, such as for override vulnerabilities, unauthorized access, SQL injection vulnerabilities, and the like.
Fig. 3 illustrates a schematic diagram of the operation of an IAST module according to an exemplary aspect of the present disclosure. The IAST module can be deployed in an application program by means such as code injection, so as to collect and monitor function execution and data transmission while the application runs and interact with the scanner in real time, allowing security defects and vulnerabilities to be identified efficiently and accurately while the code files, line numbers, functions, and parameters where the vulnerabilities are located are determined precisely.
IAST may involve inserting probes at specific positions while keeping the original logic of the target program intact; when the application runs, requests, code data flows, code control flows, and the like are acquired through the probes, and vulnerabilities are judged based on comprehensive analysis of the requests, code, data flows, and control flows.
According to an exemplary embodiment, IAST code may be injected into, and run together with, the application code running on the server under test. The IAST can thus acquire information about the application, capture the traffic accessing the application, synchronize the captured traffic information to the DAST module, and have the DAST module initiate a scanning test. During scanning, the IAST module tracks the responses to the additional tests, the coverage, and the context of the application under test, sends the related information to the management server, and the management server provides the security test result.
In this way, the IAST module can determine, when the application returns information to the user, whether the response contains user privacy/sensitive information. If so, the corresponding information is synchronized to a DAST module (e.g., DAST module 208 in FIG. 2).
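The sketch below illustrates, in simplified form and outside any real IAST agent, the decision just described: if a response contains values matching the marked sensitive patterns, the corresponding request information is handed to the DAST side. The patterns, the request dictionary, and the queue object are assumptions made for illustration only.

```python
import json
import re

# Hypothetical value patterns derived from the marked sensitive information list.
SENSITIVE_VALUE_PATTERNS = [
    re.compile(r"\b\d{17}[\dXx]\b"),   # identity document numbers
    re.compile(r"\b\d{16,19}\b"),      # bank card numbers
]

def inspect_response(request_info, response_body, dast_queue):
    """If a response contains marked sensitive data, hand the request to DAST.

    request_info: dict describing the interface/page and parameters observed
    by the (hypothetical) in-process probe; dast_queue: any object with put().
    """
    if any(p.search(response_body) for p in SENSITIVE_VALUE_PATTERNS):
        dast_queue.put({"request": request_info, "reason": "sensitive_response"})
        return True
    return False

# Usage with a plain list standing in for the synchronization channel.
class ListQueue(list):
    def put(self, item):
        self.append(item)

queue = ListQueue()
inspect_response({"url": "/user/info", "params": {"uid": "1"}},
                 json.dumps({"uid": 1, "card": "6222020200112233445"}),
                 queue)
print(queue)   # one queued request for DAST scanning
```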
Returning to fig. 2, the DAST module 208 may perform SQL injection detection, override vulnerability detection, and the like on the corresponding application interfaces/pages based on the synchronization information received from the IAST module 206 about whether the application response contains user privacy/sensitive information, so as to reduce the risk of vulnerabilities causing leakage of user privacy information.
Fig. 4 illustrates a schematic diagram of the operation of a DAST module according to an exemplary aspect of the present disclosure. The DAST module may discover the entire application structure through crawlers and/or mirrored traffic generated by online users; for example, crawlers may discover how many directories and pages the application under test has and which parameters appear in the pages. Based on the crawler analysis, the DAST module may send modified HTTP requests to the discovered pages and parameters to make attack attempts, and verify through analysis of the responses whether security vulnerabilities involving user privacy/sensitive information exist. The modified HTTP requests may include, for example, SQL injection, override vulnerability detection, unauthorized access detection, and the like.
The following example is given to facilitate a better understanding of aspects of the present disclosure by those skilled in the art. One function of an application is to query user information. The input is a user ID, and the SQL statement is: select * from user where uid = input_uid. IAST recognizes, from the fact that the response packet returned to the user contains user privacy information, that a security risk is likely to exist, and synchronizes this information to DAST.
The DAST module uses a black box approach to perform SQL injection detection and the like on this interface (page). SQL injection generally involves inserting SQL commands into a Web form submission, the query string of a domain name, or a page request, ultimately tricking the server into executing malicious SQL commands.
According to an exemplary scenario, when the DAST module performs SQL injection detection on this interface, it may find that if an attacker inputs "1 and 1=1" in the text box, the SQL statement becomes select * from user where uid = 1 and 1=1, and the user information with uid 1 is returned normally after execution; whereas if "1 and 1=2" is input, the SQL statement becomes select * from user where uid = 1 and 1=2, and no user information is returned after execution. It can then be judged that an SQL injection vulnerability very probably exists. By performing such SQL injection detection, the DAST module can find a user privacy data leakage path, so that the system can apply targeted patching to reduce the risk of user privacy/sensitive information leakage.
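A minimal sketch of such boolean-based SQL injection probing is shown below, assuming the `requests` library and a hypothetical test URL and parameter name; it merely compares the responses to the two crafted inputs described above and should only be run against systems one is authorized to test.

```python
import requests

def boolean_sqli_suspect(url, param, true_value="1 and 1=1",
                         false_value="1 and 1=2", timeout=10):
    """Boolean-based SQL injection heuristic mirroring the example above.

    Sends two crafted values to the same interface; if the "true" condition
    returns data while the "false" condition returns none, the interface very
    likely concatenates the parameter into SQL. A large difference in body
    length is used as a simple stand-in for "data vs. no data".
    """
    r_true = requests.get(url, params={param: true_value}, timeout=timeout)
    r_false = requests.get(url, params={param: false_value}, timeout=timeout)
    return len(r_true.text) > 0 and abs(len(r_true.text) - len(r_false.text)) > 100

# Hypothetical usage against an authorized test environment:
# if boolean_sqli_suspect("https://test.example.internal/user", "uid"):
#     print("possible SQL injection vulnerability on /user?uid=")
```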
An override vulnerability can result from negligence by a back-end developer: no check of the requesting user is performed when information is added, deleted, modified, or queried, so an attacker can add, delete, modify, or query other users' information.
Override vulnerabilities can be categorized into horizontal overrides and vertical overrides. A horizontal override generally refers to an access control vulnerability. For example, when a piece of data is added, deleted, modified, or queried, the application does not check which user the data belongs to, or it performs that check by reading a userid from the user-supplied form parameters. An attacker can therefore modify the userid to achieve a horizontal override. Such override vulnerabilities are typically found in places such as adding, deleting, modifying, querying, logging in, and updating.
Vertical override, on the other hand, is also called a privilege escalation attack. For example, if the application does not control user permissions, or only performs permission control on menus, an attacker who guesses the URLs of other management pages can access or control data or pages owned by other roles, thereby elevating their privileges.
Taking another example scenario, a website uses the UID as the token for identifying a user in subsequent requests, to view responses after logging in. In the test stage, the IAST module can collect and monitor function execution and data transmission while the application runs and interact with the scanner in real time, efficiently and accurately identify that the response contains sensitive information and that this verification mechanism may have security defects and vulnerabilities, and accurately determine the code files, functions, and parameters where the vulnerabilities are located. The IAST module synchronizes this information to the DAST module. The DAST module uses a black box approach to perform override vulnerability detection on the interface/page whose response contains sensitive information, so as to prevent leakage of user information caused by override problems. According to an exemplary scenario, when the DAST module performs override vulnerability detection on the interface, it may find that the return packet provided by the application after the user's phone number is modified on the interface includes the UID, so that an attacker can directly perform an override access using the UID. By performing such override vulnerability detection, the DAST module can find a user privacy data leakage path, so that the system can apply targeted repairs to reduce the risk of user privacy/sensitive information leakage.
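The following sketch illustrates one simple heuristic for the horizontal-override check described above; the URL, parameter names, and session handling are hypothetical assumptions, and comparing response bodies is only a rough stand-in for verifying that another user's record was returned.

```python
import requests

def horizontal_override_suspect(url, id_param, own_id, other_id,
                                session_cookie, timeout=10):
    """Horizontal-override heuristic: can user A read user B's record?

    Replays the same interface with another user's identifier while keeping
    user A's session. If the response is successful, non-empty, and differs
    from user A's own data, the interface may not bind records to the session.
    All names here are illustrative assumptions.
    """
    cookies = {"session": session_cookie}
    own = requests.get(url, params={id_param: own_id},
                       cookies=cookies, timeout=timeout)
    other = requests.get(url, params={id_param: other_id},
                         cookies=cookies, timeout=timeout)
    return other.status_code == 200 and bool(other.text) and other.text != own.text

# Hypothetical usage against an authorized test environment:
# if horizontal_override_suspect("https://test.example.internal/order", "userid",
#                                "1001", "1002", "A_SESSION_COOKIE"):
#     print("possible horizontal override on /order?userid=")
```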
Unauthorized access refers to defects in which addresses, authorized pages, and the like that require security configuration or authentication can be accessed directly by other users, so that sensitive information such as privileged operations, databases, and website directories can be exposed.
For example, according to one exemplary scenario, on a website, an attacker may access internal data without authentication due to improper configuration.
In the test stage, the IAST module can collect and monitor function execution and data transmission while the application runs, and interact with the scanner in real time, so that security defects and vulnerabilities that may exist on the website can be identified efficiently and accurately, and the code files, functions, and parameters where the vulnerabilities are located can be determined precisely.
The IAST module synchronizes this information to the DAST module. The DAST module uses a black box approach to perform unauthorized-access detection on the website. According to an exemplary scenario, when the DAST module performs this detection on the interface, it may find that an unauthorized-access vulnerability exists on the website, and such a vulnerability may result in sensitive information leakage (e.g., sessions, cookies, or business data may be obtained by GET enumeration). By performing unauthorized-access detection, the DAST module can find a user privacy data leakage path, so that the system can apply targeted patching to reduce the risk of user privacy/sensitive information leakage.
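A minimal sketch of such unauthorized-access probing is given below; the endpoint list is a hypothetical assumption, and a 200 response with a non-empty body is used only as a rough indicator that an endpoint which should require authentication does not.

```python
import requests

# Hypothetical internal endpoints known (from marking) to output sensitive data.
SENSITIVE_ENDPOINTS = ["/admin/users/export", "/api/internal/accounts"]

def unauthorized_access_findings(base_url, timeout=10):
    """Flag endpoints that return data without any authentication at all.

    Sends requests with no cookies or tokens; a 200 response with a non-empty
    body on an endpoint that should require authentication is reported.
    Endpoint paths are assumptions made purely for illustration.
    """
    findings = []
    for path in SENSITIVE_ENDPOINTS:
        resp = requests.get(base_url + path, timeout=timeout,
                            allow_redirects=False)
        if resp.status_code == 200 and resp.text.strip():
            findings.append(path)
    return findings
```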
The following example describes how the scheme of the present disclosure monitors a new sensitive information leakage point. When the IAST module finds that an interface/application that was considered during the review stage not to operate on user privacy information in fact returns user privacy information, the IAST module synchronizes this information to the DAST module. The DAST module checks whether the earlier data and interface marking missed some interfaces, causing a misjudgment in the security review, or whether it is merely a false positive. If data was missed, the called tables/fields/interfaces containing user privacy data are correspondingly supplemented back into the data, and the full set of applications is scanned against the new data to determine whether other applications also call those tables, fields, or interfaces, thereby finding and repairing more sensitive information leakage paths.
Returning to fig. 2, after the application is published online, DAST module 208, database log monitoring module 210, and traffic monitoring module 212 may discover and monitor the running application for user privacy information leakage paths.
According to an exemplary embodiment, the database log monitoring module 210 may determine that a missed application calls tables, fields, etc. relating to user privacy data by monitoring the database logs in combination with the previously marked sensitive information containing user privacy data (e.g., marked via the security review module 202) and the application information relating to user privacy data.
For example, in an exemplary scenario, the database log monitoring module 210 may find all applications that call user privacy data by monitoring the operation logs of the libraries, tables, and fields containing marked user privacy/sensitive information. The database log monitoring module 210 checks these applications that call user privacy data to determine whether they have all been marked. If not, the corresponding application information is determined to be miss-marked.
After application information is determined to be miss-marked, it can be determined from that application information which data table(s) or field(s) were missed when the database was marked. Based on the determined missed data tables or fields, the full application code can be scanned, e.g., by SAST, to analyze which applications have operated on those data table(s) or field(s), which interfaces have called them, and/or which provide such information to other applications after secondary processing. In this way, more miss-marked applications and interfaces can be found. According to an example, the missed application information may be added to the marked application information list.
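The sketch below illustrates, under an assumed log format of (application name, SQL text) pairs, how the database log could be diffed against the marked sensitive information list and the marked application information list to find miss-marked callers; simple substring matching is used purely for illustration.

```python
def find_unmarked_callers(db_log_lines, marked_tables, marked_applications):
    """Find applications that touch marked tables but are not yet marked.

    db_log_lines: iterable of (app_name, sql_text) pairs extracted from the
    database operation log (this log format is an assumption); marked_tables:
    set of table names from the marked sensitive information list;
    marked_applications: set of already-marked application names.
    Returns the applications to supplement into the marked list and analyze.
    """
    unmarked = set()
    for app_name, sql_text in db_log_lines:
        sql_lower = sql_text.lower()
        if any(t.lower() in sql_lower for t in marked_tables):
            if app_name not in marked_applications:
                unmarked.add(app_name)
    return unmarked

log = [("order-service", "SELECT uid, phone FROM user WHERE uid = 42"),
       ("report-job",    "SELECT count(*) FROM orders")]
print(find_unmarked_callers(log, {"user"}, {"user-service"}))  # {'order-service'}
```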
As another example, the database log monitoring module 210 may discover an unreasonable application design or a hacking attack by finding, through database log monitoring, that a single query returns a large amount of user privacy/sensitive information. The database log monitoring module 210 may record this and raise an alarm.
On the other hand, the traffic monitoring module 212 can monitor, through traffic inspection, the output of user privacy data by each application and each interface, and store the traffic whose return packets contain sensitive information; the stored traffic can be used for offline analysis and tracing of data leakage.
According to an exemplary embodiment, alert rules at the traffic level may be set. For example, one or more of the following exemplary, but non-limiting, rules, or any combination thereof, may be provided (a minimal sketch of evaluating such rules follows this list):
a. a single request returning multiple pieces of user privacy data triggers an alarm, in order to monitor database-dragging, SQL injection, and large-scale leakage of user privacy data caused by business logic;
b. the daily leakage amount of user privacy data of each interface is monitored, the threshold range of normal calling behavior is calculated statistically or by machine learning, and an alarm is raised when the threshold is exceeded;
c. access to user privacy data is analyzed per single user (provided the traffic contains user information) and per single IP, and an abnormal number of accesses triggers an alarm; this rule is mainly aimed at data leakage by internal staff; and/or
d. whether access is made through a crawler or other automated tool is determined, and an alarm is raised for access by automated tools.
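The sketch below shows one possible way to evaluate rules a through d over a day of warehoused traffic records; the record schema, thresholds, and user-agent markers are illustrative assumptions rather than values prescribed by the patent.

```python
from collections import Counter

# Illustrative thresholds; real values would be derived statistically.
MAX_RECORDS_PER_REQUEST = 50
MAX_DAILY_RECORDS_PER_INTERFACE = 10000
MAX_DAILY_REQUESTS_PER_IP = 2000
CRAWLER_UA_MARKERS = ("python-requests", "curl", "scrapy", "bot")

def evaluate_traffic_rules(daily_records):
    """Evaluate alert rules a-d above over one day of warehoused traffic.

    daily_records: list of dicts with keys 'interface', 'ip', 'user_agent' and
    'privacy_record_count' (number of privacy records in the response); this
    record schema is an assumption for illustration. Returns alert strings.
    """
    alerts = []
    per_interface = Counter()
    per_ip = Counter()
    for rec in daily_records:
        n = rec.get("privacy_record_count", 0)
        per_interface[rec["interface"]] += n
        per_ip[rec["ip"]] += 1
        if n > MAX_RECORDS_PER_REQUEST:                                  # rule a
            alerts.append(f"bulk response: {rec['interface']} returned {n} records")
        if any(m in rec.get("user_agent", "").lower() for m in CRAWLER_UA_MARKERS):
            alerts.append(f"automated-tool access from {rec['ip']}")     # rule d
    for iface, total in per_interface.items():                           # rule b
        if total > MAX_DAILY_RECORDS_PER_INTERFACE:
            alerts.append(f"daily privacy output of {iface} exceeds threshold: {total}")
    for ip, count in per_ip.items():                                     # rule c
        if count > MAX_DAILY_REQUESTS_PER_IP:
            alerts.append(f"abnormal access count from {ip}: {count}")
    return alerts
```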
With such alert rules set, when the traffic monitoring module 212 detects traffic that matches a rule, the system can raise an alarm and record (e.g., store in memory) the relevant information.
Further, the traffic monitoring module 212 may synchronize information to the DAST module 208, and the DAST module 208 performs SQL injection, override vulnerability, and unauthorized-access detection on the interface (page) to reduce the risk of disclosure of users' private information caused by vulnerabilities.
In an application scenario, various steps, modules, pages, interfaces, etc. may have leakage paths that are caused, at least in part, by rules and product capability bottlenecks.
The scheme of the disclosure can discover and monitor user privacy information leakage paths from each link of the SDL and from each layer, such as database logs and traffic analysis, and can discover various risks such as override vulnerabilities, unauthorized access, and database-dragging behaviors. After data is accumulated through database log monitoring and/or traffic monitoring, the system can analyze the risk level of each application and the importance level of each application at the security level.
A case where the scheme of the present disclosure monitors a new sensitive information leakage point after the application program has gone online is described below with reference to an example.
When traffic monitoring or a DAST return packet shows that an interface/application that was considered during the review/test stage not to operate on user privacy information in fact returns user privacy information, it is checked whether the earlier data and interface marking missed some interfaces, causing a misjudgment in the security review/test, or whether it is merely a false positive. If this is due to a data miss, the called tables/fields/interfaces containing user privacy data are correspondingly supplemented back into the data, and the full set of applications is scanned against the new data to see whether other applications also call these tables/fields/interfaces, so as to find more sensitive information leakage paths.
On the other hand, when the database logs show that an application that should not operate on user privacy data has operated on a table/field containing user privacy data, an investigation is made to determine whether the security review/test missed it, or whether the application was hacked and used as a springboard to drag user privacy data out of the database. Accordingly, the system can take security measures to remedy the vulnerability and eliminate the hidden security risk.
Fig. 5A illustrates a flow chart of a method 500A for discovery and monitoring of user privacy information leakage paths in accordance with an exemplary aspect of the present disclosure.
Method 500A may include, at block 502, supplementing the interface, table, and field information through which the application obtains user-related information.
At block 504, method 500A may include deriving a risk score for the application based on the supplemented information in combination with the previously accumulated marking information.
At block 506, the method 500A may include determining whether the application invoked user privacy information. If yes, flow is to block 508; if not, flow is to block 510.
At block 508, the method 500A may include analyzing the output points of user privacy data and marking them, for example, by SAST.
At block 510, method 500A may include using an interactive application security test to determine whether the response of the application includes marked sensitive information.
At block 512, method 500A may include performing black box detection on the application to determine whether there is a risk of causing sensitive information leakage, based on the application's response containing marked sensitive information.
According to an exemplary embodiment, blocks 502-504 may relate to a risk review phase of an application, while blocks 506-512 may pertain to an application testing phase.
After block 512, flow may go to fig. 5B. Fig. 5B illustrates a flow chart of a method 500B for discovery and monitoring of user privacy information leakage paths in accordance with an exemplary aspect of the present disclosure. Fig. 5B may be a continuation of fig. 5A.
According to an example embodiment, the method 500B of fig. 5B may involve a stage after an application is online.
At block 513, method 500B may include publishing and running the application.
At block 514, method 500B may include sampling and marking the database, table, field data newly added to the application.
At block 516, method 500B may include monitoring a log of the one or more databases to determine if any application information related to the marked sensitive information is unmarked and marking it.
At block 518, method 500B may include traffic detection for the application to monitor the output of sensitive information by each interface/page of the application and warehousing the traffic of the associated portion.
At block 520, the method 500B may include performing a black box test on the application to determine if there is a risk of causing sensitive information leakage based on the traffic detection. For example, when the traffic detection monitors that sensitive information is output by a certain interface/page of the application program, a black box detection may be performed with respect to the interface/page of the application program to determine whether there is a risk of causing sensitive information leakage.
Fig. 6A illustrates a block diagram of a user privacy information leakage pathway discovery and monitoring device 600A in accordance with an exemplary aspect of the present disclosure.
The apparatus 600A may include a module 602 for supplementing the interface, table, and field information through which the application obtains user-related information; a module 604 for deriving a risk score for the application based on the supplemented information in combination with the previously accumulated marking information; a module 606 for determining whether the application invokes user privacy information; if yes, a module 608 for analyzing the output points of user privacy data and marking them, for example, by SAST; and if not, a module 610 for determining whether the response of the application includes marked sensitive information using interactive application security testing.
The apparatus 600A may also include a module 612 for performing black box detection on the application to determine whether there is a risk of causing sensitive information leakage, based on the application's response containing marked sensitive information.
According to an exemplary embodiment, blocks 602-604 may relate to risk reviews of applications, while blocks 606-612 may relate to application testing.
Fig. 6B illustrates a block diagram of a user privacy information leakage pathway discovery and monitoring device 600B in accordance with an exemplary aspect of the present disclosure. The apparatus of fig. 6B may be combined with the apparatus of fig. 6A to form a single apparatus or may be implemented separately therefrom.
According to an example embodiment, the device 600B of fig. 6B may involve a stage after an application is online.
The apparatus 600B may include a module 613 for publishing and running the application; a module 614 for sampling and marking the database, table, and field data newly added by the application; and a module 616 for monitoring the log of the one or more databases to determine whether any application information related to the marked sensitive information is unmarked, and marking it.
The apparatus 600B may further include a module 618 for traffic detection on the application, to monitor the output of sensitive information by each interface/page of the application and to warehouse the traffic of the associated portion; and a module 620 for performing black box detection on the application to determine, based on the traffic detection, whether there is a risk of causing sensitive information leakage. For example, when traffic detection shows that sensitive information is output by a certain interface/page of the application, black box detection may be performed on that interface/page to determine whether there is a risk of causing sensitive information leakage.
The scheme of the present disclosure links and combines the capabilities of various security products and means, monitors layer by layer, and covers substantially all risk points by means of defense in depth, thereby improving the overall level of data security.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable Logic Device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in any form of storage medium known in the art. Some examples of storage media that may be used include Random Access Memory (RAM), Read Only Memory (ROM), flash memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The methods disclosed herein comprise one or more steps or actions for achieving the described method. These method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
The processor may execute software stored on a machine-readable medium. A processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry capable of executing software. Software should be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. By way of example, a machine-readable medium may comprise RAM (random access memory), flash memory, ROM (read only memory), PROM (programmable read only memory), EPROM (erasable programmable read only memory), EEPROM (electrically erasable programmable read only memory), registers, a magnetic disk, an optical disk, a hard drive, or any other suitable storage medium, or any combination thereof. The machine-readable medium may be implemented in a computer program product. The computer program product may comprise packaging material.
In a hardware implementation, the machine-readable medium may be part of a processing system that is separate from the processor. However, as will be readily appreciated by those skilled in the art, the machine-readable medium, or any portion thereof, may be external to the processing system. By way of example, machine-readable media may comprise a transmission line, a carrier wave modulated by data, and/or a computer product separate from the wireless node, all of which may be accessed by the processor through a bus interface. Alternatively or additionally, the machine-readable medium, or any portion thereof, may be integrated into the processor, such as the cache and/or general purpose register file, as may be the case.
The processing system may be configured as a general-purpose processing system having one or more microprocessors that provide processor functionality, and external memory that provides at least a portion of a machine-readable medium, all linked together with other supporting circuitry by an external bus architecture. Alternatively, the processing system may be implemented with an ASIC (application specific integrated circuit) with a processor, a bus interface, a user interface (in the case of an access terminal), supporting circuitry, and at least a portion of a machine-readable medium, integrated in a single chip, or with one or more FPGAs (field programmable gate arrays), PLDs (programmable logic devices), controllers, state machines, gating logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits capable of executing the various functionalities described throughout this disclosure. Those skilled in the art will recognize how conveniently the functionality described with respect to the processing system is implemented, depending on the particular application and the overall design constraints imposed on the overall system.
A machine-readable medium may include several software modules. These software modules include instructions that, when executed by a device, such as a processor, cause a processing system to perform various functions. These software modules may include a transmit module and a receive module. Each software module may reside in a single storage device or be distributed across multiple storage devices. As an example, when a trigger event occurs, the software module may be loaded into RAM from a hard drive. During execution of the software module, the processor may load some instructions into the cache to increase access speed. One or more cache lines may then be loaded into a general purpose register file for execution by the processor. Where functionality of a software module is described below, it will be understood that such functionality is implemented by a processor when executing instructions from the software module.
If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), and floppy disk, where a disk often reproduces data magnetically, while a disc reproduces data optically with a laser. Thus, in some aspects, a computer-readable medium may comprise a non-transitory computer-readable medium (e.g., a tangible medium). In addition, for other aspects, the computer-readable medium may include a transitory computer-readable medium (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
Accordingly, certain aspects may include a computer program product for performing the operations presented herein. For example, such a computer program product may include a computer-readable medium having instructions stored (and/or encoded) thereon that are executable by one or more processors to perform the operations described herein. In certain aspects, the computer program product may comprise packaging material.
It is to be understood that the claims are not limited to the precise configurations and components illustrated above. Various modifications, substitutions and alterations can be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.

Claims (26)

1. A method of data security involving an application, the application comprising a code base and being associated with a database, comprising:
Marking sensitive information in the database;
scanning the code base based on the marked sensitive information to determine and mark application information related to the marked sensitive information;
based on the marked application information, conducting investigation of sensitive information leakage paths;
using a security test to determine whether the application has a response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information; and
detecting the application to find further sensitive information leakage paths based on the response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information.
2. The method of claim 1, further comprising:
publishing and running the application; and
performing at least one of the following to further discover and monitor sensitive information leakage paths:
monitoring a log of the database to determine whether unmarked application information relates to the marked sensitive information, and supplementally marking and analyzing the unmarked application information; and
detecting traffic of the application to monitor the output of the marked sensitive information by each interface and page of the application, and storing the traffic associated with the output of sensitive information.
3. The method of claim 2, wherein analyzing the unmarked application information includes determining whether the unmarked application information is improperly accessed or manipulated by hacking.
4. The method of claim 2, wherein analyzing the unmarked application information includes determining whether the unmarked application information results from a missed mark of sensitive information in the database.
5. The method of claim 4, wherein, if there is a missed mark of sensitive information in the database, a full code scan of the code base is performed for the missed sensitive information to find further missed marks of sensitive information or sensitive information leakage paths.
6. The method of claim 2, further comprising:
performing black-box detection on the traffic associated with the output of sensitive information to determine whether there is a risk of sensitive information leakage.
7. The method of claim 1, wherein,
marking the sensitive information in the database further includes storing the marked sensitive information in a marked sensitive information list, and marking sensitive information of incremental and/or modified database tables.
8. The method of claim 1, wherein,
marking application information related to the marked sensitive information includes storing the application information in a marked application information list, and
marking the unmarked application information related to the marked sensitive information includes adding the unmarked application information related to the marked sensitive information to the marked application information list.
9. The method of claim 1, wherein the sensitive information comprises a sensitive table and a sensitive field of the database.
10. The method of claim 1, wherein the application information comprises pages and interfaces of the application.
11. The method of claim 2, further comprising performing offline analysis and data leakage tracing on the stored traffic associated with the output of sensitive information.
12. The method of claim 2, wherein monitoring the log of the database further comprises:
determining an interface or page that repeatedly or in large volume queries the database and returns the marked sensitive information; and
recording information about the interface or page and raising an alarm.
13. The method of claim 2, wherein detecting traffic of the application further comprises:
monitoring one or more of the following, or any combination thereof, in the application:
a. a single request returns a plurality of pieces of user privacy data;
b. the daily amount of user privacy data leaked by each interface exceeds a threshold;
c. the number of accesses to user privacy data by each user or each IP is abnormal; and
d. access is made through a crawler or other automated tool; and
recording relevant information and raising an alarm when any of the above conditions is detected.
14. An apparatus for data security relating to an application, the application comprising a code base and being associated with a database, the apparatus comprising:
a module for marking sensitive information in the database;
a module for scanning the code base based on the marked sensitive information to determine and mark application information related to the marked sensitive information;
a module for conducting an investigation of sensitive information leakage paths based on the marked application information;
a module for performing a security test to determine whether the application has a response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information; and
a module for detecting the application to discover further sensitive information leakage paths based on the response that is not determined by the investigation to be a sensitive information leakage path but contains marked sensitive information.
15. The apparatus of claim 14, further comprising:
a module for publishing and running the application; and
a module for performing at least one of the following to further discover and monitor sensitive information leakage paths:
monitoring a log of the database to determine whether unmarked application information relates to the marked sensitive information, and supplementally marking and analyzing the unmarked application information; and
detecting traffic of the application to monitor the output of the marked sensitive information by each interface and page of the application, and storing the traffic associated with the output of sensitive information.
16. The apparatus of claim 15, wherein analyzing the unmarked application information includes determining whether the unmarked application information is improperly accessed or manipulated by hacking.
17. The apparatus of claim 15, wherein analyzing the unmarked application information includes determining whether the unmarked application information results from a missed mark of sensitive information in the database.
18. The apparatus of claim 17, further comprising a module for performing, if there is a missed mark of sensitive information in the database, a full code scan of the code base for the missed sensitive information to find further missed marks of sensitive information or sensitive information leakage paths.
19. The apparatus of claim 15, further comprising:
a module for performing black-box detection on the traffic associated with the output of sensitive information to determine whether there is a risk of sensitive information leakage.
20. The apparatus of claim 14, wherein:
the module for marking sensitive information in the database further comprises a module for storing the marked sensitive information in a marked sensitive information list, and a module for marking sensitive information of incremental and/or modified database tables.
21. The apparatus of claim 14, wherein,
the module for marking application information related to the marked sensitive information comprises a module for storing the application information in a marked application information list, and
the module for marking the unmarked application information related to the marked sensitive information comprises a module for adding the unmarked application information related to the marked sensitive information to the marked application information list.
22. The apparatus of claim 14, wherein the sensitive information comprises a sensitive table and a sensitive field of the database.
23. The apparatus of claim 14, wherein the application information comprises pages and interfaces of the application.
24. The apparatus of claim 15, further comprising a module for performing offline analysis and data leakage tracing on the stored traffic associated with the output of sensitive information.
25. The apparatus of claim 15, wherein the module for monitoring the log of the database further comprises:
a module for determining an interface or page that repeatedly or in large volume queries the database and returns the marked sensitive information; and
a module for recording information about the interface or page and raising an alarm.
26. The apparatus of claim 15, wherein the module for detecting traffic of the application further comprises:
a module for monitoring one or more of the following or any combination thereof in the application:
a. a single request returns a plurality of pieces of user privacy data;
b. the daily amount of user privacy data leaked by each interface exceeds a threshold;
c. the number of accesses to user privacy data by each user or each IP is abnormal; and
d. access is made through a crawler or other automated tool; and
a module for recording relevant information and raising an alarm when any of the above conditions is detected.
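
The following is a minimal, illustrative sketch (in Python, not part of the claims) of the marking and code-scanning steps recited in claims 1, 7, 9 and 10, assuming a relational schema and a file-based code base; every table name, column name, file extension and path below is a hypothetical placeholder rather than a prescribed implementation.

# Illustrative sketch (assumed names throughout) of marking sensitive
# database fields and scanning a code base for references to them.
import os
import re
from dataclasses import dataclass, field

@dataclass
class SensitiveMark:
    table: str
    column: str

@dataclass
class MarkLists:
    sensitive: list = field(default_factory=list)    # marked sensitive information list (claim 7)
    application: list = field(default_factory=list)  # marked application information list (claim 8)

def mark_sensitive_information(schema, sensitive_columns):
    """Mark sensitive tables/fields of the database (claim 9)."""
    marks = MarkLists()
    for table, columns in schema.items():
        for column in columns:
            if column in sensitive_columns:
                marks.sensitive.append(SensitiveMark(table, column))
    return marks

def scan_code_base(code_root, marks):
    """Scan the code base and record application information (files here,
    standing in for pages/interfaces) that references marked sensitive
    tables or fields (claim 1, second step)."""
    patterns = [re.compile(r"\b(%s|%s)\b" % (m.table, m.column)) for m in marks.sensitive]
    for dirpath, _, filenames in os.walk(code_root):
        for name in filenames:
            if not name.endswith((".py", ".java", ".sql")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                text = f.read()
            if any(p.search(text) for p in patterns):
                marks.application.append(path)

# Hypothetical usage: phone and id_card are treated as sensitive fields.
schema = {"user": ["id", "name", "phone", "id_card"], "order": ["id", "amount"]}
marks = mark_sensitive_information(schema, {"phone", "id_card"})
scan_code_base("./src", marks)
print(marks.application)  # candidates for the leakage-path investigation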
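
A similarly hedged sketch of the security-test step of claim 1 follows: responses that the investigation did not flag as leakage paths are checked for marked sensitive values, and any match is queued for further detection. The captured responses and sample values are fabricated for illustration only.

# Illustrative sketch of the security-test step of claim 1.
def contains_marked_sensitive(body, marked_values):
    """True if the response body contains any marked sensitive value."""
    return any(value in body for value in marked_values)

def security_test(captured_responses, known_leak_paths, marked_values):
    """captured_responses: mapping of interface URL -> response body text,
    gathered by whatever security-testing tool is in use (an assumption)."""
    suspects = []
    for url, body in captured_responses.items():
        if url in known_leak_paths:
            continue  # already identified by the investigation
        if contains_marked_sensitive(body, marked_values):
            # not a known leakage path, yet it returns marked sensitive data:
            # detect the application further for leakage paths (claim 1, last step)
            suspects.append(url)
    return suspects

# Hypothetical usage with fabricated response bodies
responses = {
    "/api/user/profile": '{"name": "u1", "phone": "13800000000"}',
    "/api/order/list": '{"orders": []}',
}
print(security_test(responses, known_leak_paths=set(), marked_values=["13800000000"]))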
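
Finally, a sketch of the traffic-monitoring rules enumerated in claim 13 and claim 26; the thresholds, log-record fields and crawler markers are assumed values chosen for illustration and are not part of the claimed method.

# Illustrative sketch of the traffic-monitoring rules (a)-(d).
from collections import Counter, defaultdict

DAILY_LEAK_THRESHOLD = 1000        # rule b: per-interface daily limit (assumed value)
PER_IP_ACCESS_THRESHOLD = 200      # rule c: per-user, per-IP access limit (assumed value)
CRAWLER_MARKERS = ("bot", "spider", "crawler", "curl", "python-requests")  # rule d (assumed)

daily_leaks = Counter()             # interface -> privacy records returned today
ip_access = defaultdict(Counter)    # user -> IP -> access count

def alert(reason, record):
    # record the relevant information and raise an alarm
    print("ALERT:", reason, record)

def check_traffic_record(record):
    """record: one traffic log entry, e.g.
    {"interface": "/api/user", "ip": "1.2.3.4", "user": "u1",
     "privacy_records": 3, "user_agent": "python-requests/2.31"}"""
    # a. a single request returns multiple pieces of user privacy data
    if record["privacy_records"] > 1:
        alert("multiple privacy records in one response", record)
    # b. daily amount of privacy data leaked by an interface exceeds the threshold
    daily_leaks[record["interface"]] += record["privacy_records"]
    if daily_leaks[record["interface"]] > DAILY_LEAK_THRESHOLD:
        alert("interface daily leakage threshold exceeded", record)
    # c. abnormal number of accesses to privacy data by a user/IP
    ip_access[record["user"]][record["ip"]] += 1
    if ip_access[record["user"]][record["ip"]] > PER_IP_ACCESS_THRESHOLD:
        alert("abnormal per-IP access count", record)
    # d. access through a crawler or other automated tool
    if any(marker in record["user_agent"].lower() for marker in CRAWLER_MARKERS):
        alert("automated-tool access detected", record)

# Hypothetical usage with a single fabricated log entry
check_traffic_record({"interface": "/api/user", "ip": "1.2.3.4", "user": "u1",
                      "privacy_records": 3, "user_agent": "python-requests/2.31"})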
CN202010440451.2A 2020-05-22 2020-05-22 Method and device for data security related to application program Active CN111611590B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010440451.2A CN111611590B (en) 2020-05-22 2020-05-22 Method and device for data security related to application program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010440451.2A CN111611590B (en) 2020-05-22 2020-05-22 Method and device for data security related to application program

Publications (2)

Publication Number Publication Date
CN111611590A CN111611590A (en) 2020-09-01
CN111611590B true CN111611590B (en) 2023-10-27

Family

ID=72203757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010440451.2A Active CN111611590B (en) 2020-05-22 2020-05-22 Method and device for data security related to application program

Country Status (1)

Country Link
CN (1) CN111611590B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113032255B (en) * 2021-03-19 2024-03-01 广州虎牙科技有限公司 Response noise identification method, model, electronic device and computer storage medium
CN113010898B (en) * 2021-03-25 2024-04-26 腾讯科技(深圳)有限公司 Application program security testing method and related device
CN115203060B (en) * 2022-09-14 2022-12-13 深圳开源互联网安全技术有限公司 IAST-based security testing method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462988A (en) * 2014-12-16 2015-03-25 国家电网公司 Walk-through test technique based information security audit implementation method and system
CN104484607A (en) * 2014-12-16 2015-04-01 上海交通大学 Universal method and universal system for performing safety testing on Android application programs
CN106845236A (en) * 2017-01-18 2017-06-13 东南大学 A kind of application program various dimensions privacy leakage detection method and system for iOS platforms
CN107885995A (en) * 2017-10-09 2018-04-06 阿里巴巴集团控股有限公司 The security sweep method, apparatus and electronic equipment of small routine
CN109614814A (en) * 2018-10-31 2019-04-12 平安普惠企业管理有限公司 The method, apparatus and computer equipment of the sensitive log of scanning based on log monitoring
CN109639884A (en) * 2018-11-21 2019-04-16 惠州Tcl移动通信有限公司 A kind of method, storage medium and terminal device based on Android monitoring sensitive permission
EP3495978A1 (en) * 2017-12-07 2019-06-12 Virtual Forge GmbH Method for detecting vulnerabilities in software
CN110598411A (en) * 2019-09-23 2019-12-20 腾讯科技(深圳)有限公司 Sensitive information detection method and device, storage medium and computer equipment
CN110839012A (en) * 2019-09-25 2020-02-25 国网思极检测技术(北京)有限公司 Troubleshooting method for preventing sensitive information from being leaked
CN111008376A (en) * 2019-12-09 2020-04-14 国网山东省电力公司电力科学研究院 Mobile application source code safety audit system based on code dynamic analysis

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2777434C (en) * 2012-05-18 2019-09-10 Ibm Canada Limited - Ibm Canada Limitee Verifying application security vulnerabilities
US9158922B2 (en) * 2013-05-29 2015-10-13 Lucent Sky Corporation Method, system, and computer-readable medium for automatically mitigating vulnerabilities in source code


Also Published As

Publication number Publication date
CN111611590A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111611590B (en) Method and device for data security related to application program
CN104767757B (en) Various dimensions safety monitoring method and system based on WEB service
Fonseca et al. Testing and comparing web vulnerability scanning tools for SQL injection and XSS attacks
CN110266669A (en) A kind of Java Web frame loophole attacks the method and system of general detection and positioning
Elia et al. Comparing SQL injection detection tools using attack injection: An experimental study
CN112738126A (en) Attack tracing method based on threat intelligence and ATT & CK
CN103428196A (en) URL white list-based WEB application intrusion detecting method and apparatus
Djuric A black-box testing tool for detecting SQL injection vulnerabilities
Yeole et al. Analysis of different technique for detection of SQL injection
CN111488590A SQL injection detection method based on user behavior credibility analysis
Dalai et al. Neutralizing SQL injection attack using server side code modification in web applications
CN114386032A (en) Firmware detection system and method for power Internet of things equipment
CN113158197B (en) SQL injection vulnerability detection method and system based on active IAST
CN113987504A (en) Vulnerability detection method for network asset management
CN111625821A (en) Application attack detection system based on cloud platform
CN113596114A (en) Extensible automatic Web vulnerability scanning system and method
KR100670209B1 (en) Device of analyzing web application source code based on parameter status tracing and method thereof
Yan et al. Detection method of the second-order SQL injection in Web applications
CN107231364A (en) A kind of website vulnerability detection method and device, computer installation and storage medium
Long et al. An efficient algorithm and tool for detecting dangerous website vulnerabilities
KR101464736B1 (en) Security Assurance Management System and Web Page Monitoring Method
CN116932381A (en) Automatic evaluation method for security risk of applet and related equipment
Zhang et al. Research on SQL injection vulnerabilities and its detection methods
Zhang et al. An automated composite scanning tool with multiple vulnerabilities
CN106446694A (en) Xss vulnerability mining system based on network crawlers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant