CN113254932B - Application risk detection method and device, electronic equipment and medium - Google Patents
Application risk detection method and device, electronic equipment and medium
- Publication number
- CN113254932B CN113254932B CN202110669011.9A CN202110669011A CN113254932B CN 113254932 B CN113254932 B CN 113254932B CN 202110669011 A CN202110669011 A CN 202110669011A CN 113254932 B CN113254932 B CN 113254932B
- Authority
- CN
- China
- Prior art keywords
- application program
- detection
- application
- determining
- risk
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/562—Static detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Virology (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Stored Programmes (AREA)
Abstract
The disclosure provides a risk detection method, apparatus, device, medium, and program product for an application program, relating to the field of computer technology and, in particular, to the fields of security and application compliance. The risk detection method for an application program comprises the following steps: determining the category of the application program based on description information of the application program; determining, based on the category, a target detection strategy corresponding to the category from at least one detection strategy; processing the application program based on the target detection strategy to obtain detection data; and determining, based on the detection data, whether the application program is at risk.
Description
Technical Field
The present disclosure relates to the field of computer technology, in particular to the fields of security and application compliance, and more specifically to a risk detection method, apparatus, electronic device, medium, and program product for an application program.
Background
In the related art, it is often necessary to detect whether an application program is at risk. However, the related art yields poor detection results when determining whether an application program is at risk, and it is difficult to satisfy risk detection requirements in various scenarios.
Disclosure of Invention
The disclosure provides a risk detection method, a risk detection device, an electronic device, a storage medium and a program product of an application program.
According to an aspect of the present disclosure, there is provided a risk detection method for an application program, including: determining the category of the application program based on the description information of the application program; determining a target detection strategy corresponding to the category from at least one detection strategy based on the category; processing the application program based on the target detection strategy to obtain detection data; based on the detection data, it is determined whether the application is at risk.
According to another aspect of the present disclosure, there is provided a risk detection apparatus for an application program, including: the device comprises a first determining module, a second determining module, a processing module and a third determining module. The first determining module is used for determining the category of the application program based on the description information of the application program; a second determining module, configured to determine, based on the category, a target detection policy corresponding to the category from at least one detection policy; the processing module is used for processing the application program based on the target detection strategy to obtain detection data; and a third determining module, configured to determine whether the application program is at risk based on the detection data.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the risk detection method for an application as described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the risk detection method of the application program described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the risk detection method of an application program described above.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically illustrates an application scenario of a risk detection method and apparatus of an application according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a risk detection method of an application according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of a risk detection method of an application according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a risk detection method of an application according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of a risk detection apparatus of an application according to an embodiment of the present disclosure; and
fig. 6 is a block diagram of an electronic device for performing risk detection of an application to implement an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Where expressions such as "at least one of A, B, and C" are used, they should generally be interpreted as commonly understood by those skilled in the art (e.g., "a system having at least one of A, B, and C" shall include, but not be limited to, a system having A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together).
The embodiment of the disclosure provides a risk detection method for an application program. The risk detection method of the application program comprises the following steps: based on the description information of the application program, the category of the application program is determined. Then, based on the category, a target detection policy corresponding to the category is determined from the at least one detection policy, and the application program is processed based on the target detection policy, thereby obtaining detection data. Next, based on the detection data, it is determined whether the application is at risk.
Fig. 1 schematically illustrates an application scenario of a risk detection method and apparatus of an application program according to an embodiment of the present disclosure. It should be noted that fig. 1 illustrates only an example of an application scenario in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but it does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments, or scenarios.
As shown in fig. 1, an application scenario 100 according to this embodiment may include application programs 101, 102, 103 and an electronic device 104.
The electronic device 104 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop computers, desktop computers, servers, and the like. The electronic device 104 of the disclosed embodiments may, for example, run applications 101, 102, 103.
It should be noted that, the risk detection method of the application program provided by the embodiments of the present disclosure may be executed by the electronic device 104. Accordingly, the risk detection apparatus for an application provided in the embodiments of the present disclosure may be disposed in the electronic device 104.
The electronic device 104 may process the applications 101, 102, 103 in order to perform risk detection on them. For example, the electronic device 104 may run the applications 101, 102, 103 and determine, based on the running results, whether they are at risk. Alternatively, the electronic device 104 may process the relevant files of the applications 101, 102, 103 to determine whether they are at risk.
The embodiment of the present disclosure provides a risk detection method for an application program, and the risk detection method for an application program according to an exemplary embodiment of the present disclosure is described below with reference to fig. 2 to 4 in conjunction with the application scenario of fig. 1. The risk detection method of the application of the embodiments of the present disclosure may be performed by the electronic device 104 shown in fig. 1, for example.
Fig. 2 schematically illustrates a flowchart of a risk detection method of an application according to an embodiment of the present disclosure.
As shown in fig. 2, the risk detection method 200 of the application program of the embodiment of the present disclosure may include, for example, operations S210 to S240.
In operation S210, a category of the application program is determined based on the description information of the application program.
In operation S220, a target detection policy corresponding to the category is determined from the at least one detection policy based on the category.
In operation S230, the application is processed based on the target detection policy, resulting in detection data.
In operation S240, it is determined whether the application program is at risk based on the detection data.
Illustratively, the description information of the application includes, but is not limited to, the application's name and profile information. The category of the application can be determined from this description information. Applications may be categorized by industry; example categories include, but are not limited to, medical, education, finance, internet of vehicles, and games.
For different categories of applications, different detection policies may be applied in order to detect whether the applications are at risk. For example, for a current application, a detection policy corresponding to the category of the current application is determined, based on that category, as the target detection policy from at least one detection policy stored in a policy repository, and the current application is processed based on the target detection policy. Each detection policy stored in the policy repository has, for example, a corresponding tag indicating which categories of applications the policy is suited to for risk detection. A detection policy may carry multiple tags, each corresponding to a category.
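For illustration only, the tag-based selection just described could be realized as in the following sketch, assuming a simple in-memory policy repository; the class, policy names, tags, and helper function are illustrative and not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionPolicy:
    name: str
    tags: set = field(default_factory=set)  # categories this policy is suited to

def select_target_policies(policy_repository, category):
    """Pick every stored policy whose tags include the application's category."""
    return [p for p in policy_repository if category in p.tags]

# Hypothetical repository; a policy may carry several category tags.
policy_repository = [
    DetectionPolicy("traffic_monitoring", {"medical", "finance", "games"}),
    DetectionPolicy("qualification_file_check", {"education"}),
    DetectionPolicy("vehicle_data_consent_check", {"internet_of_vehicles"}),
]

print([p.name for p in select_target_policies(policy_repository, "education")])
# ['qualification_file_check']
```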
Processing the current application based on the target detection policy yields detection data; analyzing the detection data then determines whether the application is at risk.
For example, the detection data may be processed based on preset risk rules corresponding to the category of the application, and the preset risk rules may differ from category to category. Taking detection data that includes data traffic as an example, the traffic level that indicates risk differs between categories. For an application of a first category, risk is indicated when the data traffic exceeds a first threshold, so the preset risk rule corresponding to the first category is, for example, that the data traffic exceeds the first threshold. For an application of a second category, risk is indicated when the data traffic exceeds a second threshold that differs from the first threshold; the preset risk rule corresponding to the first category therefore differs from the preset risk rule corresponding to the second category.
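For instance, such per-category thresholds could be encoded as in the sketch below; the field names and threshold values are assumptions made for illustration only.

```python
# Illustrative preset risk rules keyed by category; the thresholds are invented.
PRESET_RISK_RULES = {
    "first_category":  {"max_data_traffic_mb": 50},
    "second_category": {"max_data_traffic_mb": 200},
}

def traffic_indicates_risk(category, detection_data):
    """Compare the observed data traffic with the category-specific threshold."""
    limit = PRESET_RISK_RULES[category]["max_data_traffic_mb"]
    return detection_data["data_traffic_mb"] > limit

print(traffic_indicates_risk("first_category",  {"data_traffic_mb": 80}))  # True
print(traffic_indicates_risk("second_category", {"data_traffic_mb": 80}))  # False
```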
It will be appreciated that embodiments of the present disclosure determine the category of an application and then apply different detection strategies to different categories in order to determine whether the application is at risk. In this way, different risk detection can be performed for different types of applications, improving detection results and meeting the detection requirements of different scenarios.
Fig. 3 schematically illustrates a schematic diagram of a risk detection method of an application according to an embodiment of the present disclosure.
As shown in fig. 3, an embodiment of the present disclosure includes, for example, a plurality of applications 301, a policy repository 303, and a rules repository 307.
The plurality of applications 301 include, for example, application A, application B, and application C, whose categories are, for example, category a, category b, and category c, respectively. The policy repository 303 includes, for example, at least a generic policy set and a category policy set. The generic policy set includes, for example, detection policies t1 and t2, each of which may be applicable to every category of application. The category policy set includes, for example, detection policies t3, t4, and t5, which correspond, for example, to category a, category b, and category c, respectively.
Taking application A as an example, the category of application A is determined to be category a based on the description information of application A. Then, based on category a, the detection policies corresponding to category a are determined from the policy repository 303. For example, all detection policies in the generic policy set correspond to category a, and detection policy t3, which corresponds to category a, is determined from the category policy set as a candidate detection policy. All detection policies in the generic policy set together with the candidate detection policy (detection policy t3) are then taken as the target detection policies for application A.
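The combination of the generic policy set and the category policy set just described may be expressed, for example, as the following sketch; the policy identifiers and data structures are illustrative only.

```python
# Generic policies apply to every category; category policies are matched by key.
GENERIC_POLICY_SET = ["t1", "t2"]
CATEGORY_POLICY_SET = {"a": ["t3"], "b": ["t4"], "c": ["t5"]}

def target_policies(category):
    """All generic policies plus the candidate policy for the given category."""
    return GENERIC_POLICY_SET + CATEGORY_POLICY_SET.get(category, [])

print(target_policies("a"))  # ['t1', 't2', 't3']
```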
Next, application A is processed based on the target detection policies, resulting in detection data 305. A corresponding preset risk rule r1 is then determined from the rule repository 307 based on category a, and processing the detection data 305 with the preset risk rule r1 yields risk information 309, which characterizes whether application A is at risk. Different preset risk rules in the rule repository 307 apply, for example, to different categories of applications.
According to this embodiment of the disclosure, different detection policies are configured for different categories of applications, and applications of each category are processed with the corresponding detection policies to obtain detection data. Whether an application is at risk is then determined from the detection data using the preset risk rules corresponding to its category, so that application risks are detected in a targeted manner and the accuracy of risk detection across different categories of applications is improved.
Fig. 4 schematically illustrates a schematic diagram of a risk detection method of an application according to another embodiment of the present disclosure.
As shown in fig. 4, embodiments of the present disclosure include, for example, a plurality of applications 401, a category identification model 402, a policy repository 403, a behavior trigger engine 404, a rule repository 407, and a rule engine 408.
The plurality of applications 401 include, for example, application A, application B, and application C, whose categories are, for example, category a, category b, and category c, respectively. The policy repository 403 includes, for example, at least a generic policy set, a category policy set, and a special policy set. The generic policy set includes, for example, detection policies t1 and t2, each of which may be applicable to every category of application. The category policy set includes, for example, detection policies t3, t4, and t5, which correspond, for example, to category a, category b, and category c, respectively. The special policy set includes, for example, detection policy t6; the detection policies in the special policy set are, for example, policies added temporarily according to the actual situation.
Taking application A as an example, its description information is processed using the trained category identification model 402 to obtain the category of the application, for example, category a. The input of the trained category identification model 402 is, for example, the description information of an application, and the output is, for example, the category of the application.
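The patent does not specify the form of the category identification model 402; one plausible realization is a text classifier trained on application description information, sketched below with scikit-learn. The training data, category labels, and pipeline choice (TF-IDF with logistic regression) are assumptions introduced for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: description information paired with industry categories.
descriptions = [
    "online courses, homework tutoring and teacher live streaming",
    "personal loans, credit scoring and wealth management",
    "vehicle telematics, remote unlocking and trip records",
]
categories = ["education", "finance", "internet_of_vehicles"]

category_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
category_model.fit(descriptions, categories)

# Inference: description information in, predicted category out.
print(category_model.predict(["vehicle trip records and remote unlocking"])[0])
```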
In addition to determining the category of an application from its description information, the category of the current application may also be determined from the applications and corresponding categories stored in an information repository. The categories of applications in the information repository are obtained, for example, by crawler technology, or may be preset by a user. For example, if the information repository stores category a for application A and category b for application B, and the current application is application A, then category a of the current application is obtained from the information repository.
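A sketch of this repository look-up, with a fall-back to the trained identification model, is given below; the repository contents and function names are hypothetical.

```python
# Hypothetical information repository: known application names mapped to
# categories (e.g. collected by a crawler or preset by a user).
INFORMATION_REPOSITORY = {
    "application A": "a",
    "application B": "b",
}

def determine_category(app_name, description="", category_model=None):
    """Prefer the repository entry; otherwise fall back to the trained model."""
    if app_name in INFORMATION_REPOSITORY:
        return INFORMATION_REPOSITORY[app_name]
    if category_model is not None:
        return category_model.predict([description])[0]
    return None

print(determine_category("application A"))  # 'a'
```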
Then, based on category a, the detection policies corresponding to category a are determined from the policy repository 403. For example, all detection policies in the generic policy set correspond to category a, and detection policy t3, which corresponds to category a, is determined from the category policy set as a candidate detection policy. All detection policies in the generic policy set together with the candidate detection policy (detection policy t3) are then taken as the target detection policies for application A. If the special policy set contains a detection policy corresponding to category a, that detection policy may also be taken as a target detection policy for application A.
Next, the behavior trigger engine 404 is invoked based on the target detection policies, and the behavior trigger engine 404 processes application A based on the target detection policies, resulting in detection data 405. A corresponding preset risk rule r1 is then determined from the rule repository 407 based on category a, the rule engine 408 is invoked based on the preset risk rule r1, and the rule engine 408 processes the detection data 405 according to the preset risk rule r1 to obtain risk information 409, which characterizes whether application A is at risk. Different preset risk rules in the rule repository 407 apply, for example, to different categories of applications.
According to this embodiment of the disclosure, different detection policies and preset risk rules are configured for different categories of applications, and whether an application is at risk is determined through the corresponding detection policies and preset risk rules. Application risks are thus detected in a targeted manner, and the risk detection method of this embodiment is suitable for risk detection of applications of different categories.
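For illustration only, the flow of fig. 4 could be orchestrated as in the following self-contained sketch; the engine classes, method names, and rule shapes are assumptions standing in for the behavior trigger engine 404 and rule engine 408 and are not part of the patent.

```python
class BehaviorTriggerEngine:
    def run(self, app, policies):
        # A real engine would exercise the app according to each target policy;
        # here we return canned detection data for the sketch.
        return {"data_traffic_mb": 120, "consent_obtained": False}

class RuleEngine:
    def evaluate(self, risk_rule, detection_data):
        # The application is flagged if any check in the preset risk rule fires.
        return any(check(detection_data) for check in risk_rule)

# Hypothetical rule repository: one preset risk rule per category.
RULE_REPOSITORY = {
    "a": [lambda d: d["data_traffic_mb"] > 100,
          lambda d: not d["consent_obtained"]],
}

def detect_risk(app, category, target_policies):
    detection_data = BehaviorTriggerEngine().run(app, target_policies)
    return RuleEngine().evaluate(RULE_REPOSITORY[category], detection_data)

print(detect_risk("application A", "a", ["t1", "t2", "t3"]))  # True -> at risk
```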
In one example, the target detection policy corresponding to the category of the application includes running the application. Processing the application based on the target detection policy comprises running the application to obtain running data, which serves as the detection data. Determining whether the application is at risk based on the detection data includes, for example, but is not limited to, the following examples.
For example, based on the detection data, it is determined whether permission to collect data is obtained while the application is running. If it is determined that permission to collect data is not obtained during running, the application is determined to be at risk. The collected data includes, but is not limited to, user information and other relevant information. Taking an internet-of-vehicles application as an example, the collected data includes, for example, the user's personal information and the vehicle's driving information. If the application collects data during running without obtaining the user's authorization, its behavior does not comply with the relevant regulations, and the application is therefore determined to be at risk.
For example, based on the detection data, it is determined whether the files for the application are displayed while the application is running. If it is determined that the files for the application are not displayed during running, the application is determined to be at risk. Taking an education application as an example, if the relevant files of the educational institution are not displayed when the application runs (for example, when the user registers), the application's behavior does not comply with the relevant rules, and the application is determined to be at risk. The relevant files of the educational institution include, for example, the institution's qualification documents and teacher qualification documents.
For example, based on the detection data, it is determined whether the data collected while the application is running is data that is prohibited from being collected. If the data collected during running is determined to be prohibited data, the application is determined to be at risk. For example, some types of data are prohibited from being collected; if such data are collected while the application is running, its behavior does not comply with the relevant specifications, and the application is determined to be at risk.
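These three run-time checks can be illustrated together as follows; the field names in the detection data and the list of prohibited data types are assumptions for the sketch and do not come from the patent.

```python
# Illustrative checks over running data.
PROHIBITED_DATA_TYPES = {"precise_location_history", "contact_list"}

def runtime_risks(detection_data, category):
    risks = []
    if not detection_data.get("collection_consent_obtained", False):
        risks.append("data collected without user authorization")
    if category == "education" and not detection_data.get("qualification_files_shown", True):
        risks.append("required files not displayed while running")
    prohibited = set(detection_data.get("collected_data_types", [])) & PROHIBITED_DATA_TYPES
    if prohibited:
        risks.append(f"prohibited data collected: {sorted(prohibited)}")
    return risks  # a non-empty list means the application is at risk

print(runtime_risks(
    {"collection_consent_obtained": False, "collected_data_types": ["contact_list"]},
    category="internet_of_vehicles"))
```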
In this embodiment of the present disclosure, running data is obtained by running the application, and the risk information of applications of different categories is determined from their respective running data. By detecting application risks in a targeted manner, the risk detection method is therefore suitable for risk detection in different scenarios.
In another example, the target detection policy corresponding to the category of the application includes processing the files of the application. Processing the application based on the target detection policy comprises processing the application's files to obtain qualification information contained in the files, which serves as the detection data. If the qualification information is invalid, the application is determined to be at risk. Taking an education application as an example, the application's files include, for example, the qualification documents of the educational institution and teacher qualification documents. The files are processed to obtain the qualification information they contain, and whether the application is at risk is determined based on that information. For example, when a file is a picture, image recognition is performed on it to obtain the qualification information, which includes a validity period, and it is determined whether the current time falls within that validity period. If the current time is not within the validity period, the qualification information is invalid, and the application is determined to be at risk.
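A minimal sketch of this validity check on the extracted qualification information follows; the image-recognition step that produces the validity period is assumed and not shown, and the field names are illustrative.

```python
from datetime import date

def qualification_is_valid(qualification_info, today=None):
    """Check whether the current time falls within the extracted validity period."""
    today = today or date.today()
    return qualification_info["valid_from"] <= today <= qualification_info["valid_until"]

# e.g. qualification information recognized from a picture of a qualification file
info = {"valid_from": date(2019, 9, 1), "valid_until": date(2024, 8, 31)}
print(not qualification_is_valid(info, today=date(2025, 1, 10)))  # True -> at risk
```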
In this embodiment of the disclosure, qualification information is obtained by processing the files of an application, and the risk information of applications of different categories is determined from their respective qualification information. By detecting application risks in a targeted manner, the accuracy of risk detection for applications of different categories is improved.
Fig. 5 schematically illustrates a block diagram of a risk detection apparatus of an application according to an embodiment of the present disclosure.
As shown in fig. 5, the risk detection apparatus 500 of the application program of the embodiment of the present disclosure includes, for example, a first determination module 510, a second determination module 520, a processing module 530, and a third determination module 540.
The first determination module 510 may be configured to determine a category of the application based on the description information of the application. According to an embodiment of the present disclosure, the first determining module 510 may perform, for example, the operation S210 described above with reference to fig. 2, which is not described herein.
The second determining module 520 may be configured to determine, based on the category, a target detection policy corresponding to the category from the at least one detection policy. The second determining module 520 may, for example, perform operation S220 described above with reference to fig. 2 according to an embodiment of the present disclosure, which is not described herein.
The processing module 530 may be configured to process the application based on the target detection policy to obtain detection data. According to an embodiment of the present disclosure, the processing module 530 may perform, for example, operation S230 described above with reference to fig. 2, which is not described herein.
The third determination module 540 may be configured to determine whether the application is at risk based on the detection data. According to an embodiment of the present disclosure, the third determining module 540 may perform, for example, the operation S240 described above with reference to fig. 2, which is not described herein.
According to an embodiment of the present disclosure, the at least one detection policy includes a generic policy set and a class policy set; the second determination module 520 includes: a first determination sub-module and a second determination sub-module. The first determining submodule is used for determining candidate detection strategies corresponding to the categories from the category strategy set based on the categories. And the second determining submodule is used for taking the detection strategy and the candidate detection strategy in the general strategy set as target detection strategies.
According to an embodiment of the present disclosure, the target detection policy includes running the application; the processing module 530 is further configured to: run the application to obtain running data, which serves as the detection data.
According to an embodiment of the present disclosure, the third determining module 540 includes: the third determination sub-module and the fourth determination sub-module. And the third determination submodule is used for determining whether the permission of collecting data is obtained in the process of running the application program or not based on the detection data. And the fourth determining submodule is used for determining that the application program has risk in response to determining that the permission of collecting data is not obtained in the process of running the application program.
According to an embodiment of the present disclosure, the third determining module 540 includes: a fifth determination sub-module and a sixth determination sub-module. And a fifth determining sub-module for determining whether the file for the application program is displayed in the process of running the application program based on the detection data. And a sixth determining submodule, configured to determine that the application program is at risk in response to determining that the file for the application program is not exposed during the running of the application program.
According to an embodiment of the present disclosure, the third determining module 540 includes: a seventh determination submodule and an eighth determination submodule. And a seventh determining sub-module for determining whether the data collected during the running of the application program is data which is prohibited from being collected based on the detection data. And an eighth determining submodule, configured to determine that the application program has a risk in response to determining that the data acquired during the running of the application program is the data prohibited from being acquired.
According to an embodiment of the present disclosure, the target detection policy includes processing files of the application; the processing module 530 is further configured to: processing the file of the application program to obtain qualification information in the file as detection data; the third determining module 540 is further configured to: in response to the qualification information being invalid, determining that the application is at risk.
According to an embodiment of the present disclosure, the first determining module 510 is further configured to: and processing the description information by using the trained class identification model to obtain the class of the application program.
In the technical solution of the present disclosure, the collection, storage, and use of the user personal information involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 is a block diagram of an electronic device for performing risk detection of an application to implement an embodiment of the present disclosure.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. The electronic device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the respective methods and processes described above, such as a risk detection method of an application program. For example, in some embodiments, the risk detection method of an application program may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the risk detection method of the application program described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the risk detection method of the application by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions provided by the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (17)
1. A risk detection method for an application program, comprising:
determining the category of the application program based on the description information of the application program;
determining a target detection strategy corresponding to the category from at least one detection strategy based on the category;
processing the application program based on the target detection strategy to obtain detection data; and
based on the detection data, determining whether the application is at risk,
wherein the at least one detection policy comprises a generic policy set and a class policy set; the determining, based on the category, a target detection policy corresponding to the category from at least one detection policy includes:
determining a candidate detection strategy corresponding to the category from the category strategy set based on the category; and
and taking the detection strategies in the general strategy set and the candidate detection strategies as the target detection strategies.
2. The method of claim 1, wherein the target detection policy comprises running the application; the processing the application program based on the target detection strategy to obtain detection data comprises the following steps:
and operating the application program to obtain operation data serving as the detection data.
3. The method of claim 2, wherein the determining whether the application is at risk based on the detection data comprises:
determining whether to obtain the authority of collecting data in the process of running the application program based on the detection data; and
in response to determining that rights to collect data are not obtained during running of the application, determining that the application is at risk.
4. The method of claim 2, wherein the determining whether the application is at risk based on the detection data comprises:
determining whether a file for the application program is displayed in the process of running the application program based on the detection data; and
in response to determining that a file for the application is not shown during running of the application, determining that the application is at risk.
5. The method of claim 2, wherein the determining whether the application is at risk based on the detection data comprises:
determining whether the data acquired in the process of running the application program is the data which is forbidden to be acquired or not based on the detection data; and
in response to determining that data collected during the running of the application is data that is prohibited from being collected, determining that the application is at risk.
6. The method of claim 1, wherein the target detection policy comprises processing a file of an application;
the processing the application program based on the target detection strategy to obtain detection data comprises the following steps: processing the file of the application program to obtain qualification information in the file as the detection data;
the determining whether the application program is at risk based on the detection data includes: and determining that the application program is at risk in response to the qualification information being invalid.
7. The method of any of claims 1-6, wherein the determining the class of the application based on the application's descriptive information comprises:
and processing the description information by using a trained category identification model to obtain the category of the application program.
8. A risk detection apparatus for an application program, comprising:
the first determining module is used for determining the category of the application program based on the description information of the application program;
a second determining module, configured to determine, based on the category, a target detection policy corresponding to the category from at least one detection policy;
the processing module is used for processing the application program based on the target detection strategy to obtain detection data; and
a third determining module for determining whether the application program has a risk based on the detection data,
wherein the at least one detection policy comprises a generic policy set and a class policy set; the second determining module includes:
a first determining submodule, configured to determine, based on the category, a candidate detection policy corresponding to the category from the category policy set; and
and the second determining submodule is used for taking the detection strategies in the general strategy set and the candidate detection strategies as the target detection strategies.
9. The apparatus of claim 8, wherein the target detection policy comprises running the application; the processing module is further configured to:
and operating the application program to obtain operation data serving as the detection data.
10. The apparatus of claim 9, wherein the third determination module comprises:
a third determining submodule, configured to determine whether to obtain permission to collect data in a process of running the application program based on the detection data; and
and the fourth determining submodule is used for determining that the application program is at risk in response to determining that the authority of collecting data is not obtained in the process of running the application program.
11. The apparatus of claim 9, wherein the third determination module comprises:
a fifth determining submodule, configured to determine whether a file for the application program is shown in a process of running the application program based on the detection data; and
a sixth determination submodule is used for determining that the application program has risk in response to determining that the file for the application program is not displayed in the process of running the application program.
12. The apparatus of claim 9, wherein the third determination module comprises:
a seventh determining submodule, configured to determine, based on the detection data, whether data acquired during the running of the application program is data that is prohibited from being acquired; and
and an eighth determining submodule, configured to determine that the application program is at risk in response to determining that data acquired during the running of the application program is data that is prohibited from being acquired.
13. The apparatus of claim 8, wherein the target detection policy comprises a file of a processing application;
the processing module is further configured to: processing the file of the application program to obtain qualification information in the file as the detection data;
the third determining module is further configured to: and determining that the application program is at risk in response to the qualification information being invalid.
14. The apparatus of any of claims 8-13, wherein the first determining module is further to:
and processing the description information by using a trained category identification model to obtain the category of the application program.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110669011.9A CN113254932B (en) | 2021-06-16 | 2021-06-16 | Application risk detection method and device, electronic equipment and medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110669011.9A CN113254932B (en) | 2021-06-16 | 2021-06-16 | Application risk detection method and device, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113254932A CN113254932A (en) | 2021-08-13 |
CN113254932B true CN113254932B (en) | 2024-02-27 |
Family
ID=77188266
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110669011.9A Active CN113254932B (en) | 2021-06-16 | 2021-06-16 | Application risk detection method and device, electronic equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113254932B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113888181A (en) * | 2021-10-25 | 2022-01-04 | 支付宝(杭州)信息技术有限公司 | Business processing and risk detection strategy system construction method, device and equipment |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034058A (en) * | 2010-11-25 | 2011-04-27 | 中国联合网络通信集团有限公司 | Method for controlling safety of application software and terminal |
CN102917346A (en) * | 2012-10-17 | 2013-02-06 | 浙江大学城市学院 | Security policy management system and method for Android-based application program during operation |
WO2014133528A1 (en) * | 2013-02-28 | 2014-09-04 | Hewlett-Packard Development Company, L.P. | Determining coverage of dynamic security scans using runtime and static code analyses |
CN104216785A (en) * | 2014-08-26 | 2014-12-17 | 烽火通信科技股份有限公司 | Common policy task system and implementing method thereof |
CN104838630A (en) * | 2012-10-10 | 2015-08-12 | 思杰系统有限公司 | Policy-based application management |
CN108509796A (en) * | 2017-02-24 | 2018-09-07 | 中国移动通信集团公司 | A kind of detection method and server of risk |
CN109190374A (en) * | 2018-07-25 | 2019-01-11 | 安徽三实信息技术服务有限公司 | Application software safety detecting system and detection method in a kind of application system |
CN110198313A (en) * | 2019-05-23 | 2019-09-03 | 新华三信息安全技术有限公司 | A kind of method and device of strategy generating |
CN110197315A (en) * | 2018-04-08 | 2019-09-03 | 腾讯科技(深圳)有限公司 | Methods of risk assessment, device and its storage medium |
CN110399302A (en) * | 2019-07-26 | 2019-11-01 | 中国工商银行股份有限公司 | Risk Identification Method, device, electronic equipment and the medium of Software Testing Project |
CN111209575A (en) * | 2018-11-22 | 2020-05-29 | 阿里巴巴集团控股有限公司 | Data protection method, generation method, transmission method, device and storage medium |
CN111724069A (en) * | 2020-06-22 | 2020-09-29 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and storage medium for processing data |
CN111737692A (en) * | 2020-08-17 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Application program risk detection method and device, equipment and storage medium |
CN111753701A (en) * | 2020-06-18 | 2020-10-09 | 百度在线网络技术(北京)有限公司 | Violation detection method, device and equipment of application program and readable storage medium |
CN112214418A (en) * | 2020-12-04 | 2021-01-12 | 支付宝(杭州)信息技术有限公司 | Application compliance detection method and device and electronic equipment |
CN112462921A (en) * | 2019-09-09 | 2021-03-09 | 中兴通讯股份有限公司 | Application program management method, device and storage medium |
CN112513846A (en) * | 2018-05-22 | 2021-03-16 | 诺顿卫复客公司 | System and method for controlling application startup based on security policy |
CN112685737A (en) * | 2020-12-24 | 2021-04-20 | 恒安嘉新(北京)科技股份公司 | APP detection method, device, equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8209738B2 (en) * | 2007-05-31 | 2012-06-26 | The Board Of Trustees Of The University Of Illinois | Analysis of distributed policy rule-sets for compliance with global policy |
US9467465B2 (en) * | 2013-02-25 | 2016-10-11 | Beyondtrust Software, Inc. | Systems and methods of risk based rules for application control |
US11093535B2 (en) * | 2017-11-27 | 2021-08-17 | International Business Machines Corporation | Data preprocessing using risk identifier tags |
US20200285761A1 (en) * | 2019-03-07 | 2020-09-10 | Lookout, Inc. | Security policy manager to configure permissions on computing devices |
US20200387843A1 (en) * | 2019-06-08 | 2020-12-10 | Trustarc Inc | Risk management of processes utilizing personal data |
- 2021-06-16: Application CN202110669011.9A filed in China; granted as patent CN113254932B (status: Active)
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034058A (en) * | 2010-11-25 | 2011-04-27 | 中国联合网络通信集团有限公司 | Method for controlling safety of application software and terminal |
CN104838630A (en) * | 2012-10-10 | 2015-08-12 | 思杰系统有限公司 | Policy-based application management |
EP2907290A1 (en) * | 2012-10-10 | 2015-08-19 | Citrix Systems Inc. | Policy-based application management |
CN102917346A (en) * | 2012-10-17 | 2013-02-06 | 浙江大学城市学院 | Security policy management system and method for Android-based application program during operation |
WO2014133528A1 (en) * | 2013-02-28 | 2014-09-04 | Hewlett-Packard Development Company, L.P. | Determining coverage of dynamic security scans using runtime and static code analyses |
CN104216785A (en) * | 2014-08-26 | 2014-12-17 | 烽火通信科技股份有限公司 | Common policy task system and implementing method thereof |
CN108509796A (en) * | 2017-02-24 | 2018-09-07 | 中国移动通信集团公司 | A kind of detection method and server of risk |
CN110197315A (en) * | 2018-04-08 | 2019-09-03 | 腾讯科技(深圳)有限公司 | Methods of risk assessment, device and its storage medium |
CN112513846A (en) * | 2018-05-22 | 2021-03-16 | 诺顿卫复客公司 | System and method for controlling application startup based on security policy |
CN109190374A (en) * | 2018-07-25 | 2019-01-11 | 安徽三实信息技术服务有限公司 | Application software safety detecting system and detection method in a kind of application system |
CN111209575A (en) * | 2018-11-22 | 2020-05-29 | 阿里巴巴集团控股有限公司 | Data protection method, generation method, transmission method, device and storage medium |
CN110198313A (en) * | 2019-05-23 | 2019-09-03 | 新华三信息安全技术有限公司 | A kind of method and device of strategy generating |
CN110399302A (en) * | 2019-07-26 | 2019-11-01 | 中国工商银行股份有限公司 | Risk Identification Method, device, electronic equipment and the medium of Software Testing Project |
CN112462921A (en) * | 2019-09-09 | 2021-03-09 | 中兴通讯股份有限公司 | Application program management method, device and storage medium |
CN111753701A (en) * | 2020-06-18 | 2020-10-09 | 百度在线网络技术(北京)有限公司 | Violation detection method, device and equipment of application program and readable storage medium |
CN111724069A (en) * | 2020-06-22 | 2020-09-29 | 百度在线网络技术(北京)有限公司 | Method, apparatus, device and storage medium for processing data |
CN111737692A (en) * | 2020-08-17 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Application program risk detection method and device, equipment and storage medium |
CN112214418A (en) * | 2020-12-04 | 2021-01-12 | 支付宝(杭州)信息技术有限公司 | Application compliance detection method and device and electronic equipment |
CN112685737A (en) * | 2020-12-24 | 2021-04-20 | 恒安嘉新(北京)科技股份公司 | APP detection method, device, equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
Wang Hui et al., "Research on security risk analysis and strategy for mobile applications," Electronic Technology & Software Engineering, 2019, vol. 148, no. 2, pp. 180-181. *
Also Published As
Publication number | Publication date |
---|---|
CN113254932A (en) | 2021-08-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9298926B2 (en) | Remediation of security vulnerabilities in computer software | |
CN113705362B (en) | Training method and device of image detection model, electronic equipment and storage medium | |
CN113704102B (en) | Application program compliance detection method, device, equipment and medium | |
CN113360580A (en) | Abnormal event detection method, device, equipment and medium based on knowledge graph | |
CN113963186A (en) | Training method of target detection model, target detection method and related device | |
CN113627526A (en) | Vehicle identification recognition method and device, electronic equipment and medium | |
CN113254932B (en) | Application risk detection method and device, electronic equipment and medium | |
CN114924959A (en) | Page testing method and device, electronic equipment and medium | |
CN112508005B (en) | Method, apparatus, device and storage medium for processing image | |
CN113469732A (en) | Content understanding-based auditing method and device and electronic equipment | |
CN113643260A (en) | Method, apparatus, device, medium and product for detecting image quality | |
CN113495825A (en) | Line alarm processing method and device, electronic equipment and readable storage medium | |
KR102209577B1 (en) | System and method of analyzing risks of patent infringement | |
CN114492370B (en) | Webpage identification method, webpage identification device, electronic equipment and medium | |
CN113360672B (en) | Method, apparatus, device, medium and product for generating knowledge graph | |
CN113010721B (en) | Picture auditing method and device, electronic equipment and storage medium | |
CN114492364A (en) | Same vulnerability judgment method, device, equipment and storage medium | |
CN114093006A (en) | Training method, device and equipment of living human face detection model and storage medium | |
CN113221035A (en) | Method, apparatus, device, medium, and program product for determining an abnormal web page | |
CN114724370B (en) | Traffic data processing method, device, electronic equipment and medium | |
CN113239296B (en) | Method, device, equipment and medium for displaying small program | |
CN114677564B (en) | Training sample generation method, deep learning model training method and device | |
CN111143149B (en) | Method and device for back displaying request data, computer equipment and storage medium | |
JP7247497B2 (en) | Selection device and selection method | |
US20240202345A1 (en) | Attack scenario generation apparatus, attack scenario generation method, and computer readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||