CN113254938A - Method, device and medium for processing automated security test results - Google Patents

Method, device and medium for processing automated security test results

Info

Publication number
CN113254938A
CN113254938A
Authority
CN
China
Prior art keywords
vulnerability
processing
data
vulnerability data
false
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110339438.2A
Other languages
Chinese (zh)
Inventor
张咏梅
赵泽栋
林卫华
李燕
陈国龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Huaxing Bank Co ltd
Original Assignee
Guangdong Huaxing Bank Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Huaxing Bank Co ltd filed Critical Guangdong Huaxing Bank Co ltd
Priority to CN202110339438.2A priority Critical patent/CN113254938A/en
Publication of CN113254938A publication Critical patent/CN113254938A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F21/577Assessing vulnerabilities and evaluating computer system security
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3692Test management for test results analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24564Applying rules; Deductive queries

Abstract

The method for processing automated security test results comprises: collecting the test results produced after an automated security testing tool performs a security test on a target application system; screening a database for the false-positive filter corresponding to each piece of vulnerability data, and using that filter to mark vulnerability data matching a preset false-positive rule as first false-positive vulnerability data; performing false-positive marking on the vulnerability data other than the first false-positive vulnerability data, treating data marked as a false positive as false-positive vulnerability data, and repairing and verifying the target application system according to data marked as to-be-repaired; and calculating the false-positive rate of the automated security testing tool from the total number of false-positive vulnerability data and the total number of vulnerability data in the test results. The method improves the accuracy of the security evaluation of the tested target application system.

Description

Method, device and medium for processing automated security test results
Technical Field
The invention relates to the field of automated security testing, and in particular to a method, device and medium for processing automated security test results.
Background
When performing security testing on software applications, an automated security testing tool is used to detect application security vulnerabilities in the application system and thereby find the system's security flaws. Common automated application security testing tools include SAST (Static Application Security Testing), DAST (Dynamic Application Security Testing) and IAST (Interactive Application Security Testing) tools, but the results of all of these methods contain a certain proportion of false positives, so the output of the automated security testing process must be manually re-judged. Because of the interference of these misjudgments, the reliability of each tool's test report and of the security evaluation criteria is reduced. Each time an automated security testing tool completes a test of one aspect of a target application system it outputs a test result containing a list of suspected vulnerabilities, and because the characteristics of the systems under test differ, the criteria for judging false positives also differ. Existing security quality assessment methods evaluate the security of a system mainly from the number of vulnerabilities reported by the automated security testing tool and the risk level of those vulnerabilities; they do not take the tool's own misjudgment behaviour into account as a parameter for evaluating the application system, so the security evaluation of the tested application system is inaccurate.
Disclosure of Invention
To overcome the shortcomings of the prior art, a first object of the present invention is to provide a method for processing automated security test results that solves the following problem: existing security quality assessment methods evaluate system security mainly from the number of vulnerabilities reported by an automated security testing tool and the risk level of those vulnerabilities, and do not use the tool's own misjudgment behaviour as a parameter for evaluating the application system, so the security evaluation of the tested application system is inaccurate.
A second object of the present invention is to provide an electronic device that solves the same problem.
A third object of the present invention is to provide a computer-readable storage medium that solves the same problem.
The first object of the invention is achieved by the following technical solution:
A method for processing automated security test results comprises the following steps:
collecting test results: collecting the test results produced after an automated security testing tool performs a security test on a target application system, and importing all of the test results into a database, wherein each test result contains several pieces of vulnerability data;
false-positive filtering: screening the database for the false-positive filter corresponding to each piece of vulnerability data, using the filter to mark the vulnerability data that matches a preset false-positive rule as first false-positive vulnerability data, and treating the first false-positive vulnerability data as false-positive vulnerability data;
vulnerability management: performing false-positive marking on the vulnerability data other than the first false-positive vulnerability data, treating the data marked as a false positive as false-positive vulnerability data, and repairing and verifying the target application system according to the data marked as to-be-repaired; and
calculating the tool false-positive rate: calculating the false-positive rate of the automated security testing tool from the total number of false-positive vulnerability data and the total number of vulnerability data in the test results.
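The final step above reduces to a simple ratio over one test result's records. A minimal sketch of that calculation (the record layout and the "false_positive" flag name are assumptions for illustration, not a schema from the patent):

```python
def tool_false_positive_rate(vulns):
    """Tool false-positive rate = false-positive records / all records.

    `vulns` is a hypothetical list of dicts, one per piece of vulnerability
    data in a test result, each carrying a boolean "false_positive" flag.
    """
    total = len(vulns)
    if total == 0:
        return 0.0
    false_positives = sum(1 for v in vulns if v["false_positive"])
    return false_positives / total

# Example: 2 false positives out of 5 reported vulnerabilities
sample = [{"false_positive": fp} for fp in (True, False, True, False, False)]
print(tool_false_positive_rate(sample))  # 0.4
```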
Further, the vulnerability management comprises the following steps:
vulnerability marking: sending the vulnerability data other than the first false-positive vulnerability data to a testing module, where a tester marks each vulnerability; receiving the marking results returned by the testing module; treating vulnerability data marked as to-be-repaired as to-be-repaired vulnerability data, and vulnerability data marked as a false positive as false-positive vulnerability data;
vulnerability processing: sending the to-be-repaired vulnerability data to a development module, where developers process each vulnerability; to-be-repaired vulnerability data marked as a false positive during this processing is treated as second false-positive vulnerability data and sent to an arbitration module for arbitration by a security specialist, and to-be-repaired vulnerability data marked as to-be-verified and repaired during this processing is sent to the testing module for verification by a tester;
arbitration processing: sending the second false-positive vulnerability data to the arbitration module, where a security specialist arbitrates and produces an arbitration result; second false-positive vulnerability data whose arbitration result is to-be-repaired is sent to the development module so that a developer can repair the target application system accordingly, and once repaired it is sent to the testing module for verification by the tester; second false-positive vulnerability data whose arbitration result is false positive is recorded as false-positive vulnerability data.
Further, an arbitration rate is calculated from the total number of second false-positive vulnerability data and the total number of vulnerability data in the test results other than the first false-positive vulnerability data.
Further, a test false-positive rate is calculated from the total number of second false-positive vulnerability data whose arbitration result is false positive and the total number of vulnerability data in the test results other than the first false-positive vulnerability data.
Further, a development misjudgment rate is calculated from the total number of second false-positive vulnerability data whose arbitration result is to-be-repaired and the total number of vulnerability data in the test results other than the first false-positive vulnerability data.
Further, the verification processing tests the target application system against the to-be-repaired vulnerability data that was marked as to-be-verified and repaired, or against the repaired second false-positive vulnerability data; if the corresponding vulnerability no longer exists, that data is treated as repaired vulnerability data. After vulnerability management, a vulnerability repair rate is calculated from the number of repaired vulnerability data and the total number of vulnerability data in the test results other than the first false-positive vulnerability data.
Further, the vulnerability processing also treats to-be-repaired vulnerability data marked as unmodified as preliminary unmodified vulnerability data and sends it to the arbitration module for arbitration by an arbitrator; the arbitration processing further treats preliminary unmodified vulnerability data whose arbitration result is unmodified as unmodified vulnerability data, sends preliminary unmodified vulnerability data whose arbitration result is to-be-modified to the development module for repair by developers, and sends the repaired data to the testing module for verification by testers. After vulnerability management, a problem carry-over rate is calculated from the total number of unmodified vulnerability data and the total number of vulnerability data in the test results other than the first false-positive vulnerability data.
Further, each piece of vulnerability data comprises an application-scope field, feature fields and non-feature fields; the application-scope field comprises a version name, a system name and a tool name, and screening the database for the false-positive filter corresponding to each piece of vulnerability data specifically means screening the database for the filter matching that version name, system name and tool name.
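Since the filter is selected by the version name, system name and tool name of the application-scope field, the lookup can be keyed on that triple. A minimal sketch (the data layout and the example values are assumptions for illustration):

```python
# Filters registered per (version, system, tool) application scope.
filters = {
    ("V202011", "loan-product-system", "AppScan"): ["rule_sql_1", "rule_sql_2"],
}

def filters_for(vuln):
    """Return the false-positive filters matching a record's application scope."""
    key = (vuln["version"], vuln["system"], vuln["tool"])
    return filters.get(key, [])

v = {"version": "V202011", "system": "loan-product-system", "tool": "AppScan"}
print(filters_for(v))  # ['rule_sql_1', 'rule_sql_2']
```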
The second object of the invention is achieved by the following technical solution:
an electronic device comprising: a processor;
a memory; and a program, wherein the program is stored in the memory and configured to be executed by the processor, and the program comprises instructions for performing the method for processing automated security test results described in the present application.
The third object of the invention is achieved by the following technical solution:
a computer-readable storage medium on which a computer program is stored, the computer program being executed by a processor to perform the method for processing automated security test results described in the present application.
Compared with the prior art, the invention has the following beneficial effects: in the method for processing automated security test results, the false-positive filter corresponding to each piece of vulnerability data is screened out of a database, the filter marks vulnerability data matching a preset false-positive rule as first false-positive vulnerability data, and the first false-positive vulnerability data is treated as false-positive vulnerability data; false-positive marking is then performed on the remaining vulnerability data, data marked as a false positive is treated as false-positive vulnerability data, and the target application system is repaired and verified according to the data marked as to-be-repaired; finally, the false-positive rate of the automated security testing tool is calculated from the total number of false-positive vulnerability data and the total number of vulnerability data in the test results. By filtering out and analysing the tool's misjudged data, the method improves the accuracy of the security evaluation of the tested target application system and allows the system's vulnerabilities to be repaired more accurately.
The foregoing is only an overview of the technical solution of the invention. To make the technical solution clearer and implementable according to the contents of the description, a detailed description follows with reference to the preferred embodiments of the invention and the accompanying drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of the method for processing automated security test results according to the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings and specific embodiments. It should be noted that, provided there is no conflict, the embodiments and technical features described below can be combined to form new embodiments.
As shown in FIG. 1, the method for processing automated security test results in the present application comprises the following steps.
and collecting test results, collecting test results obtained after the automatic safety test tool carries out safety test on the target application system, and importing all the test results into a database, wherein the test results contain a plurality of pieces of vulnerability data. Each kind of automatic safety testing tool all can generate a test result to the safety test of each side of single target application system, collects the test result after different automatic safety testing tools carry out the safety test to different application systems through the report server in this embodiment, namely the test result after the automatic safety testing tool carries out the test to the application system that needs to carry out the safety test all can be uploaded to the report server. And establishing a corresponding report file for the version-system-test tool according to the application directory in the report server for storing each test result. And simultaneously, deploying a monitoring processing program on the report server, scanning folders corresponding to different application directories by the monitoring processing program at regular time, reading the report file once finding a report file with a newly-added test result, importing all the test results in the report file into a database, creating an archiving directory on the report server in the embodiment, moving the processed report file to the archiving directory, keeping the consistency of the directory, and renaming the renamed file automatically according to the moving date and time.
In the present application each test result contains several pieces of vulnerability data, and each piece of vulnerability data contains three kinds of raw data fields: an application-scope field, feature fields and non-feature fields. Application-scope field: used to identify whether records belong to the same project; it contains a version name, a system name and a tool name, and records with the same version name, system name and tool name are identified as belonging to the same project. Feature fields: combinations of these fields help determine whether two records describe the same problem and whether a problem is a false positive; the feature fields of different automated security testing tools differ slightly. For the test results of a DAST-type tool the feature fields are typically: problem type, trigger address, trigger parameters, request message and response message. For the test results of a SAST-type tool they are typically: problem type, source code path, source code file name, problem code line and problem code segment. For the test results of an IAST-type tool they are typically: problem type, trigger address, source code path, source code file name, request message and response message. The non-feature fields are auxiliary information fields such as: problem severity, test time, problem impact, repair recommendations and judgment basis.
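The three kinds of fields could be represented as one record type, with the feature fields forming the key used for deduplication. A sketch using DAST-style feature fields (the field names paraphrase the description above; they are not a mandated schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VulnRecord:
    # application-scope fields: identify the project
    version: str
    system: str
    tool: str
    # feature fields (DAST-style): identify the problem itself
    problem_type: str
    trigger_address: str
    trigger_params: str
    request_message: str
    response_message: str
    # non-feature fields: auxiliary information only
    severity: str = ""
    repair_advice: str = ""

    def feature_key(self):
        """Key used to decide whether two records describe the same problem."""
        return (self.problem_type, self.trigger_address, self.trigger_params,
                self.request_message, self.response_message)
```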
In this embodiment, after the database receives a test result, new-project processing, deduplication and new-problem recording are performed. New-project processing: the application-scope field of the vulnerability data in the test result is used to decide whether a new project must be added to the database; if so, the corresponding project information (version name, application name and tool name) is added. If the same automated security testing tool already has a project for a different version of the application system, the filters, personnel, policies and other information of the old version are automatically copied and inherited, and after the project is created an email is sent to a contact person (generally an administrator; configurable). Deduplication: the feature fields of each piece of vulnerability data are used to decide whether the record is already present in the database's consolidated problem table; if an identical record exists within the same project it is judged to be a duplicate, no new record is added and the marked content and other information of the original problem are left unchanged, but the problem's latest import date is updated. New-problem recording: non-duplicate records are inserted into the problem table; in addition to the fields of the original table, each record gains the fields "latest import time", "test status", "tester", "development status", "developer", "arbitration status", "arbitrator", "vulnerability status" and "automatically marked", and every newly recorded problem has its status set to to-be-confirmed. Problems can be imported under two import strategies: an "append-only" strategy and a "full" strategy.
Under the append-only strategy, only problem records that have not yet been imported are imported. Under the full strategy, every import is treated as a full record of the current problems: if a record was imported previously but the problem does not appear in the current import, the problem is considered repaired, the record's "vulnerability status" column is marked "repaired", and its "automatically marked" column is set to "yes". The administrator can configure and switch between the two strategies in the project configuration; the default is the append-only strategy.
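The deduplication and the two import strategies can be sketched together: a duplicate within a project only refreshes the latest import date, and under the full strategy any previously stored problem absent from the new import is automatically marked as repaired. The key and field names below are assumptions for illustration:

```python
def import_results(problem_table, new_records, today, strategy="append-only"):
    """problem_table maps feature-key -> record dict; new_records is the
    current report's list of (feature_key, record) pairs."""
    seen = set()
    for key, record in new_records:
        seen.add(key)
        if key in problem_table:
            problem_table[key]["last_import"] = today   # duplicate: refresh the date only
        else:
            record.update(status="to_confirm", last_import=today, auto_marked="no")
            problem_table[key] = record                 # genuinely new problem
    if strategy == "full":
        for key, record in problem_table.items():
            if key not in seen and record.get("vuln_state") != "repaired":
                record["vuln_state"] = "repaired"       # absent from a full import => fixed
                record["auto_marked"] = "yes"
    return problem_table
```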
False-positive filtering: the database is screened for the false-positive filter corresponding to each piece of vulnerability data, the filter marks vulnerability data matching a preset false-positive rule as first false-positive vulnerability data, and the first false-positive vulnerability data is treated as false-positive vulnerability data. In this embodiment, a filter in the database is generated by saving the SQL of a query condition, and filters can be migrated and inherited; a filter can be disabled manually, and after the filter set is changed and takes effect, false-positive marking can be re-run on the project; the filter-marking permission belongs to the "tester" role.
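Storing a filter as the SQL of its query condition means that re-marking a project is just running the saved condition as an UPDATE. A minimal SQLite sketch (the table and column names are assumptions, and a production version should use parameterized queries rather than string interpolation):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE problems "
           "(id INTEGER, problem_type TEXT, trigger_address TEXT, mark TEXT)")
db.executemany("INSERT INTO problems VALUES (?,?,?,?)",
               [(1, "XSS", "/health", ""), (2, "SQLi", "/login", "")])

# A saved filter: just the WHERE clause of a query condition.
saved_filter = "problem_type = 'XSS' AND trigger_address LIKE '/health%'"

# Applying the filter marks matching records as false positives.
db.execute(f"UPDATE problems SET mark = 'false_positive' WHERE {saved_filter}")
marks = dict(db.execute("SELECT id, mark FROM problems"))
print(marks)  # {1: 'false_positive', 2: ''}
```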
Vulnerability management: false-positive marking is performed on the vulnerability data other than the first false-positive vulnerability data; data marked as a false positive is treated as false-positive vulnerability data, and the target application system is repaired and verified according to the data marked as to-be-repaired. Vulnerability management comprises the following sub-steps:
vulnerability marking: the vulnerability data other than the first false-positive vulnerability data is sent to the testing module, where a tester marks each vulnerability; the marking results returned by the testing module are received, data marked as to-be-repaired is treated as to-be-repaired vulnerability data, and data marked as a false positive is treated as false-positive vulnerability data;
vulnerability processing: the to-be-repaired vulnerability data is sent to the development module, where developers process each vulnerability; data marked as a false positive during this processing is treated as second false-positive vulnerability data and sent to the arbitration module for arbitration by a security specialist; data marked as to-be-verified and repaired is sent to the testing module for verification by a tester; and preliminary unmodified vulnerability data is sent to the arbitration module for arbitration by the arbitrator. The verification processing in this step tests the target application system against the to-be-repaired vulnerability data that was marked as to-be-verified and repaired; if the corresponding vulnerability no longer exists, that data is treated as repaired vulnerability data;
arbitration processing: the second false-positive vulnerability data is sent to the arbitration module, where a security specialist arbitrates and produces an arbitration result; data whose arbitration result is to-be-repaired is sent to the development module so that a developer can repair the target application system accordingly, and once repaired it is sent to the testing module for verification by the tester; data whose arbitration result is false positive is recorded as false-positive vulnerability data. Likewise, preliminary unmodified vulnerability data whose arbitration result is to-be-modified is sent to the development module for repair by developers, and the repaired data is sent to the testing module for verification by testers. The verification processing in this step tests the target application system against the repaired second false-positive vulnerability data; if the corresponding vulnerability no longer exists, that data is treated as repaired vulnerability data. This embodiment also supports querying: a tester or developer can query vulnerability data by the chosen "version number", "system name" and "tool name", for example V202011 + loan product system + AppScan, where V202011 is the version number, "loan product system" is the system name, and AppScan (a web security testing and monitoring tool from IBM's Rational software division) is the tool name. For the feature fields, an expression-type condition is built from whether a field "contains", "does not contain", "equals" or "does not equal" one or more values.
The results that need to be marked as false positives are determined by combining such conditions with logical AND and logical OR. After a query, records can be marked individually or in batch. A "false positive" filter can also be generated from a query condition: a query condition can be saved as a filter, provided it can act as a generic filter for the project. After saving, the corresponding SQL query statement is stored in the database, and whenever a result report is imported into this project again, these filters automatically mark the matching records as false positives.
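The "contains / does not contain / equals / does not equal" conditions combined with AND or OR can be compiled into exactly such a saved WHERE clause. A small sketch (the operator names are assumptions; as noted above, real code should parameterize the values):

```python
OPS = {
    "contains":     lambda f, v: f"{f} LIKE '%{v}%'",
    "not_contains": lambda f, v: f"{f} NOT LIKE '%{v}%'",
    "equals":       lambda f, v: f"{f} = '{v}'",
    "not_equals":   lambda f, v: f"{f} != '{v}'",
}

def compile_filter(conditions, joiner="AND"):
    """conditions: list of (field, op, value) tuples; joiner: 'AND' or 'OR'."""
    parts = [OPS[op](field, value) for field, op, value in conditions]
    return f" {joiner} ".join(parts)

clause = compile_filter([("problem_type", "equals", "XSS"),
                         ("trigger_address", "contains", "/health")])
print(clause)  # problem_type = 'XSS' AND trigger_address LIKE '%/health%'
```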
And calculating the false alarm rate of the test tool, and calculating the false alarm rate of the test tool corresponding to the automatic safety test tool according to the total number of the false alarm vulnerability data and the total number of the vulnerability data in the test result.
Calculating an arbitration rate and a test false alarm rate, wherein the arbitration rate is calculated according to the total amount of second false alarm vulnerability data and the total amount of vulnerability data except the first false alarm vulnerability data in the test result; and calculating the test false alarm rate according to the total amount of the second false alarm vulnerability data with the arbitration result as false alarm and the total amount of the vulnerability data except the first false alarm vulnerability data in the test result. In this embodiment, the higher the arbitration rate is, it is proved that the development staff and the testing staff in the project team of the security test have a greater divergence in the setting up of the vulnerability standard, and a manager needs to organize training or communication. The higher the false test report rate, the more the testers in the project team need to strengthen the safety test technical capability. And the arbitration rate is the total amount of the second false alarm vulnerability data/the total amount of vulnerability data except the first false alarm vulnerability data in the test result. And the test false alarm rate is the total amount of the second false alarm bug data with false alarm/the total amount of the bug data except the first false alarm bug data in the test result.
And calculating the development misjudgment rate, and calculating the development misjudgment rate according to the arbitration result as the total quantity of the second false-alarm vulnerability data to be repaired and the total quantity of the vulnerability data except the first false-alarm vulnerability data in the test result. The higher the development misjudgment rate is, the lower the application system safety attention of developers in a project team is proved, or technical short boards exist, and training and communication are needed. And the development misjudgment rate is the arbitration result which is the total amount of the second false-alarm bug data to be repaired/the total amount of the bug data except the first false-alarm bug data in the test result.
Calculating a vulnerability repair rate according to the total amount of repaired vulnerability data and the total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data. That is: vulnerability repair rate = number of repaired vulnerability data items / total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data. A high repair rate indicates that the project team pays close attention to security issues and has strong technical capability.
Calculating a problem carry-over rate according to the total amount of unmodified vulnerability data and the total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data. That is: problem carry-over rate = total amount of unmodified vulnerability data / total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data. A higher carry-over rate indicates a more complex application system, for which fixes would introduce higher risk; a dedicated refactoring and remediation effort is recommended in due course.
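The rate calculations above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the record field names ("first_false_alarm", "arbitration", "status") and their values are assumptions introduced for the example.

```python
# Sketch of the metric calculations described above. All rates share the same
# denominator: vulnerability data excluding the first false-alarm data.
# Field names and status values are illustrative assumptions.

def rates(vulns):
    base = [v for v in vulns if not v.get("first_false_alarm")]
    n = len(base) or 1  # guard against division by zero
    # Records sent to arbitration are the "second false-alarm" data.
    second = [v for v in base if "arbitration" in v]
    return {
        "arbitration_rate": len(second) / n,
        "test_false_alarm_rate":
            sum(v["arbitration"] == "false_alarm" for v in second) / n,
        "development_misjudgment_rate":
            sum(v["arbitration"] == "to_repair" for v in second) / n,
        "repair_rate": sum(v.get("status") == "repaired" for v in base) / n,
        "carry_over_rate": sum(v.get("status") == "unmodified" for v in base) / n,
    }
```

For example, with four non-first-false-alarm records of which two went to arbitration, the arbitration rate is 0.5.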
Calculating an overall risk coefficient. The vulnerability data in the test result excluding the false-alarm vulnerability data and the repaired vulnerability data are taken as unrepaired vulnerability data. First, the risk level corresponding to each piece of unrepaired vulnerability data is determined according to a preset vulnerability risk-level rule; the levels are then accumulated to obtain the overall risk coefficient. Specifically: in the preset vulnerability risk-level rule for each application system, the risk of each vulnerability may be divided into m levels, each level corresponding to a risk coefficient k_m (1 < k_m < 2, with a larger k_m for a higher-risk level), and n_m being the number of unrepaired vulnerabilities (excluding false alarms and fixed vulnerabilities) at that level; the overall risk coefficient of the application system under test is the sum over all levels of k_m × n_m. The importance of each application system within the overall terminal system may likewise be divided into several grades, each grade corresponding to a key coefficient p (p > 1). Denoting by L the risk coefficient of a terminal system containing several application systems:

L = Σ_i p_i × (Σ_m k_m × n_m)_i

where the inner sum is the overall risk coefficient of the i-th application system and p_i is its key coefficient.
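The weighted sums above can be sketched as follows. The level names, coefficient values, and data layout are illustrative assumptions, not values from the patent.

```python
# Sketch of the risk-coefficient accumulation described above.
# level_coeffs maps a risk level to its coefficient k_m (1 < k_m < 2);
# unrepaired_counts maps a risk level to n_m, the count of unrepaired
# vulnerabilities at that level.

def overall_risk(level_coeffs, unrepaired_counts):
    """Overall risk coefficient of one application system: sum of k_m * n_m."""
    return sum(k * unrepaired_counts.get(level, 0)
               for level, k in level_coeffs.items())

def terminal_risk(app_systems):
    """Terminal-system risk L: sum of p_i * overall risk of each system (p_i > 1)."""
    return sum(p * overall_risk(coeffs, counts)
               for p, coeffs, counts in app_systems)
```

For instance, a system with two high-risk (k = 1.8) and five low-risk (k = 1.2) unrepaired vulnerabilities has an overall risk coefficient of 1.8 × 2 + 1.2 × 5 = 9.6.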
The present application further provides an electronic device, comprising: a processor;
a memory; and a program, wherein the program is stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the method of processing automated security test results of the present application.
The present application also provides a computer-readable storage medium having stored thereon a computer program that is executed by a processor to perform the method of processing automated security test results of the present application.
According to the method of processing automated security test results of the present application: a corresponding false-alarm filter is selected from a database for each piece of vulnerability data; the false-alarm filter marks vulnerability data matching a preset false-alarm vulnerability rule as first false-alarm vulnerability data, which is treated as false-alarm vulnerability data; the vulnerability data other than the first false-alarm vulnerability data undergo false-alarm marking, with data marked as false alarms treated as false-alarm vulnerability data, and the target application system is repaired and verified according to the data marked as to-be-repaired; the false alarm rate of the automated security testing tool is then calculated according to the total amount of false-alarm vulnerability data and the total amount of vulnerability data in the test result. This removes and analyzes the misjudged data of the automated security testing tool, improves the accuracy of the security evaluation of the target application system under test, and enables more precise vulnerability repair. The security skills of the project team, the effectiveness of the testing tool, and the workload of the security specialist can also be evaluated accordingly, assisting management in making decisions and adjustments.
The foregoing is merely a preferred embodiment of the invention and is not intended to limit the invention in any way. Those skilled in the art can readily practice the invention as shown in the drawings and described above. However, they should appreciate that the disclosed conception and specific embodiments may readily serve as a basis for designing or modifying other structures for the same purposes without departing from the scope of the invention as defined by the appended claims. Likewise, any equivalent changes, modifications, and developments of the above embodiments made according to the essential techniques of the present invention remain within the protection scope of the technical solution of the present invention.

Claims (10)

1. A method of processing automated security test results, characterized in that the method comprises the following steps:
collecting test results: collecting the test results obtained after an automated security testing tool performs a security test on a target application system, and importing all the test results into a database, the test results containing a plurality of pieces of vulnerability data;
false-alarm filtering: selecting a corresponding false-alarm filter from the database for each piece of vulnerability data, using the false-alarm filter to mark vulnerability data matching a preset false-alarm vulnerability rule as first false-alarm vulnerability data, and treating the first false-alarm vulnerability data as false-alarm vulnerability data;
vulnerability management: performing false-alarm marking on the vulnerability data other than the first false-alarm vulnerability data, treating vulnerability data marked as false alarms as false-alarm vulnerability data, and performing repair processing and verification processing on the target application system according to the vulnerability data marked as to-be-repaired; and
calculating a testing-tool false alarm rate: calculating the false alarm rate of the automated security testing tool according to the total amount of false-alarm vulnerability data and the total amount of vulnerability data in the test result.
2. The method of processing automated security test results of claim 1, characterized in that the vulnerability management comprises the following steps:
vulnerability marking: sending the vulnerability data other than the first false-alarm vulnerability data to a testing module for marking by a tester, receiving the marking result from the testing module, treating vulnerability data marked as to-be-repaired as to-be-repaired vulnerability data, and treating vulnerability data marked as false alarms as false-alarm vulnerability data;
vulnerability processing: sending the to-be-repaired vulnerability data to a development module for processing by developers; sending the to-be-repaired vulnerability data marked as false alarms after processing, as second false-alarm vulnerability data, to an arbitration module for arbitration by a security specialist; and sending the to-be-repaired vulnerability data marked as to-be-verified and repaired after processing to the testing module for verification by a tester; and
arbitration processing: sending the second false-alarm vulnerability data to the arbitration module for arbitration by a security specialist to obtain an arbitration result; sending second false-alarm vulnerability data whose arbitration result is to-be-repaired to the development module so that developers repair the target application system accordingly; sending the repaired second false-alarm vulnerability data to the testing module for verification by the tester; and recording second false-alarm vulnerability data whose arbitration result is false alarm as false-alarm vulnerability data.
3. The method of processing automated security test results of claim 2, characterized in that: an arbitration rate is calculated according to the total amount of the second false-alarm vulnerability data and the total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data.
4. The method of processing automated security test results of claim 2, characterized in that: a test false alarm rate is calculated according to the total amount of second false-alarm vulnerability data whose arbitration result is false alarm and the total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data.
5. The method of processing automated security test results of claim 2, characterized in that: a development misjudgment rate is calculated according to the total amount of second false-alarm vulnerability data whose arbitration result is to-be-repaired and the total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data.
6. The method of processing automated security test results of claim 2, characterized in that: the verification processing tests the target application system against the to-be-repaired vulnerability data marked as to-be-verified and repaired after vulnerability processing, or against the repaired second false-alarm vulnerability data; if the corresponding vulnerability no longer exists, that data is treated as repaired vulnerability data; and after vulnerability management, a vulnerability repair rate is calculated according to the number of repaired vulnerability data items and the total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data.
7. The method of processing automated security test results of claim 2, characterized in that: the vulnerability processing further comprises treating to-be-repaired vulnerability data marked as unmodified after processing as preliminary unmodified vulnerability data and sending it to the arbitration module for arbitration; the arbitration processing further comprises treating preliminary unmodified vulnerability data whose arbitration result is unmodified as unmodified vulnerability data, sending preliminary unmodified vulnerability data whose arbitration result is to-be-repaired to the development module for repair by developers, and sending the repaired preliminary unmodified vulnerability data to the testing module for verification by testers; and after vulnerability management, a problem carry-over rate is calculated according to the total amount of unmodified vulnerability data and the total amount of vulnerability data in the test result excluding the first false-alarm vulnerability data.
8. The method of processing automated security test results of claim 1, characterized in that: each piece of vulnerability data comprises an application-range field, characteristic fields and non-characteristic fields, the application-range field comprising a version name, a system name and a tool name; and selecting a corresponding false-alarm filter from the database for each piece of vulnerability data specifically comprises selecting the corresponding false-alarm filter from the database according to the version name, the system name and the tool name.
9. An electronic device, characterized by comprising: a processor;
a memory; and a program, wherein the program is stored in the memory and configured to be executed by the processor, the program comprising instructions for performing the method of processing automated security test results of any of claims 1-8.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that: the computer program is executed by a processor to perform the method of processing automated security test results of any of claims 1-8.
CN202110339438.2A 2021-03-30 2021-03-30 Method, device and medium for processing automatic safety test result Pending CN113254938A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110339438.2A CN113254938A (en) 2021-03-30 2021-03-30 Method, device and medium for processing automatic safety test result

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110339438.2A CN113254938A (en) 2021-03-30 2021-03-30 Method, device and medium for processing automatic safety test result

Publications (1)

Publication Number Publication Date
CN113254938A true CN113254938A (en) 2021-08-13

Family

ID=77181449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110339438.2A Pending CN113254938A (en) 2021-03-30 2021-03-30 Method, device and medium for processing automatic safety test result

Country Status (1)

Country Link
CN (1) CN113254938A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117061222A (en) * 2023-09-12 2023-11-14 北京安全共识科技有限公司 Vulnerability data acquisition method and vulnerability verification method

Similar Documents

Publication Publication Date Title
US8566903B2 (en) Enterprise evidence repository providing access control to collected artifacts
US7594142B1 (en) Architecture for automated detection and analysis of security issues
CN111488578A (en) Continuous vulnerability management for modern applications
US9558230B2 (en) Data quality assessment
US9047164B2 (en) Calculating defect density by file and source module
CN111240994B (en) Vulnerability processing method and device, electronic equipment and readable storage medium
US7757125B2 (en) Defect resolution methodology and data defects quality/risk metric model extension
Ozment Vulnerability discovery & software security
US20070233414A1 (en) Method and system to develop a process improvement methodology
CN113469857A (en) Data processing method and device, electronic equipment and storage medium
CN115952081A (en) Software testing method, device, storage medium and equipment
CN113254938A (en) Method, device and medium for processing automatic safety test result
CN114528201A (en) Abnormal code positioning method, device, equipment and medium
CN110471912B (en) Employee attribute information verification method and device and terminal equipment
EP3818437B1 (en) Binary software composition analysis
CN116471131B (en) Processing method and processing device for logical link information asset
US7822721B2 (en) Correction server for large database systems
Nakamura Towards unified vulnerability assessment with open data
US20220414210A1 (en) Malicious data access as highlighted graph visualization
Chan et al. Extracting vulnerabilities from github commits
US20230418939A1 (en) Method for managing externally imported files, apparatus for the same, computer program for the same, and recording medium storing computer program thereof
CN114817929B (en) Method and device for dynamically tracking and processing vulnerability of Internet of things, electronic equipment and medium
KR102011694B1 (en) Public institutional income property linkage data verification system and its recording medium
CN112486823B (en) Error code verification method and device, electronic equipment and readable storage medium
CN114461609A (en) Data processing method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination