CN117312228A - Computing device and data protection method

Computing device and data protection method

Info

Publication number: CN117312228A
Application number: CN202210704496.5A
Authority: CN
Other languages: Chinese (zh)
Prior art keywords: target, address, memory space, processor, data
Inventors: 闵新�, 齐元吉·查克拉博蒂, 周海林
Assignee (current and original): Huawei Technologies Co Ltd
Filing and priority date: 2022-06-21
Priority applications: CN202210704496.5A; PCT/CN2023/094161 (published as WO2023246373A1)
Publication date: 2023-12-29
Legal status: Pending

Classifications

    • G06F15/167 Interprocessor communication using a common memory, e.g. mailbox
    • G06F21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 Protecting personal data, e.g. for financial or medical purposes

Abstract

A computing device includes a first processor, on which a target APP and a first driver are deployed, and a second processor, in which a first protection unit is deployed. The first processor sends, through the first driver, a first message to the second processor instructing it to execute a target model corresponding to the target APP; the first message includes a first memory space address of input data and a second memory space address of output data corresponding to the target model. Through the first protection unit, the second processor determines the validity of the first memory space address and the second memory space address, executes the target model when the first protection unit determines that both addresses are legal, and restricts execution of the target model when the first protection unit determines that at least one of them is illegal. The user's private data is thus effectively protected while an AI-related APP processes data.

Description

Computing device and data protection method
Technical Field
The present application relates to the field of artificial intelligence (artificial intelligence, AI) technology, and in particular, to a computing device and a data protection method.
Background
With the continuous development of AI technology, more and more terminal devices are deployed with AI-related applications (APPs), such as finance APPs and cloud storage APPs. Such APPs can intelligently recognize and detect video, images, or speech, and may process the user's private data while doing so. Users, however, are increasingly aware of privacy protection, so how to effectively protect private data while an AI-related APP processes data is a technical problem that needs to be solved.
Disclosure of Invention
The application provides a computing device, a data protection method, a computer-readable storage medium, a computer program product, and a chip, which can effectively protect a user's private data while an AI-related APP processes data.
In a first aspect, the present application provides a computing device comprising a first processor, on which a target application (APP) and a first driver are deployed, and a second processor, in which a first protection unit is deployed. The first processor is configured to send a first message to the second processor through the first driver; the first message instructs the second processor to execute a target model corresponding to the target APP, and includes a first memory space address of input data and a second memory space address of output data corresponding to the target model. The second processor is configured to determine, through the first protection unit, the validity of the first memory space address and the second memory space address. The second processor is further configured to execute the target model when the first protection unit determines that both addresses are legal, or to restrict execution of the target model when the first protection unit determines that at least one of them is illegal. Illustratively, the first processor may be a CPU, the second processor may be an NPU, the first protection unit may be the AI protection unit 123 described below, and the first driver may be the AI driver 111 described below. The computing device thus moves the AI protection check from the CPU to the NPU, replacing a software check that switches execution environments frequently and performs poorly with a hardware check that protects in real time and performs better, while also reducing the CPU's workload. The relevant data is checked before the NPU performs AI processing, and no additional encryption or decryption is needed.
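As a minimal sketch of what such a first message might carry (the application defines no wire format, so the C layout and all field names below are assumptions):

```c
#include <stdint.h>

/* Hypothetical layout of the first message: it names the target model and
 * carries the first memory space address (input) and the second memory
 * space address (output). The lengths correspond to the buffer lengths
 * that the command carries alongside each address (see the buffer
 * attribute list discussion below). */
typedef struct {
    uint32_t model_id;     /* identifies the target model of the target APP */
    uint64_t input_addr;   /* first memory space address: input data */
    uint64_t input_size;   /* length of the input buffer */
    uint64_t output_addr;  /* second memory space address: output data */
    uint64_t output_size;  /* length of the output buffer */
} ai_exec_cmd_t;
```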
In one possible implementation, the second processor is further specifically configured to: determine, through the first protection unit, whether the target model can access a target address, where the target address is the first memory space address or the second memory space address; and when the target model cannot access the target address, determine that the target address is illegal. Leakage of the user's private data is thereby avoided.
In one possible implementation, the second processor is specifically configured to: determine, through the first protection unit, whether a target address exists in a preconfigured first attribute list, where the target address is the first memory space address or the second memory space address, and the first attribute list includes one or more legal memory space addresses and the data type of the data stored at each legal memory space address; and when the target address does not exist in the first attribute list, determine that the target address is illegal.
In one possible implementation, the second processor is further specifically configured to: when the target model can access the target address or the target address exists in the first attribute list, determine, through the first protection unit, a first type of the data stored at the target address from the first attribute list; query, through the first protection unit, a preconfigured model rule list to determine a first rule list corresponding to the target model, where the model rule list includes the target input data and target output data required by each model and the usage rules they follow; and determine, through the first protection unit, whether the target address is legal according to the first rule list and the data type corresponding to the target address in the first attribute list.
In one possible implementation, the model rule list includes a data type mask composed of at least one bit, where each bit is used to indicate one data type.
In a second aspect, the present application provides a data protection method that may be applied to a computing device. The computing device includes a first processor, on which a target application (APP) and a first driver are deployed, and a second processor, in which a first protection unit is deployed. The method may include: the first processor sends a first message to the second processor through the first driver, where the first message is used to instruct execution of a target model corresponding to the target APP and includes a first memory space address of input data and a second memory space address of output data corresponding to the target model; the second processor determines, through the first protection unit, the validity of the first memory space address and the second memory space address; and the second processor executes the target model when the first protection unit determines that both addresses are legal, or restricts execution of the target model when the first protection unit determines that at least one of them is illegal.
In one possible implementation manner, the second processor determines validity of the first memory space address and the second memory space address through the first protection unit, and specifically includes: the second processor determines whether the target model can access a target address through the first protection unit, wherein the target address is a first memory space address or a second memory space address; when the target model cannot access the target address, the second processor determines that the target address is illegal.
In one possible implementation manner, the second processor determines validity of the first memory space address and the second memory space address through the first protection unit, and specifically includes: the second processor determines whether a target address exists in a preconfigured first attribute list through a first protection unit, wherein the target address is a first memory space address or a second memory space address, and the first attribute list comprises one or more legal memory space addresses and the data type of data stored in each legal memory space address; when the target address does not exist in the first attribute list, the second processor determines that the target address is illegal.
In one possible implementation, the method further includes: when the target model can access the target address or the target address exists in the first attribute list, the second processor determines a first type of data stored in the target address from the first attribute list through the first protection unit; the second processor queries a preconfigured model rule list through the first protection unit to determine a first rule list corresponding to the target model, wherein the model rule list comprises target input data and target output data required by each model and usage rules followed by the target input data and the target output data; the second processor determines whether the target address is legal or not through the first protection unit according to the first rule list and the data type corresponding to the target address in the first attribute list.
In one possible implementation, the model rule list includes a data type mask composed of at least one bit, where each bit is used to indicate one data type.
In a third aspect, the present application provides a computer readable storage medium storing a computer program which, when run on a processor, causes the processor to perform the method described in the second aspect or any one of the possible implementations of the second aspect.
In a fourth aspect, the present application provides a computer program product which, when run on a processor, causes the processor to perform the method described in the second aspect or any one of the possible implementations of the second aspect.
In a fifth aspect, the present application provides a chip comprising at least one processor and an interface; the at least one processor obtains program instructions or data through the interface; and the at least one processor is configured to execute the program instructions to implement the method described in the second aspect or any one of the possible implementations of the second aspect.
It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not described here again.
Drawings
FIG. 1 is a schematic diagram of a computing device provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of another computing device provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a model registry and a model rule list provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a buffer attribute list provided in an embodiment of the present application;
FIG. 5 is a schematic flowchart of a data protection method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
The term "and/or" herein is an association relationship describing an associated object, and means that there may be three relationships, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. The symbol "/" herein indicates that the associated object is or is a relationship, e.g., A/B indicates A or B.
The terms "first" and "second" and the like in the description and in the claims are used for distinguishing between different objects and not for describing a particular sequential order of objects. For example, the first response message and the second response message, etc. are used to distinguish between different response messages, and are not used to describe a particular order of response messages.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise specified, "a plurality of" means two or more; for example, a plurality of processing units means two or more processing units, and a plurality of elements means two or more elements.
Illustratively, AI-related APPs process large volumes of data, so a general-purpose central processing unit (CPU) alone can hardly meet the requirements on computing speed and power consumption. Generally, a processor adept at processing massive amounts of data, such as a neural-network processing unit (NPU), cooperates with the CPU to process the data.
By way of example, FIG. 1 illustrates a hardware architecture of a computing device. As shown in FIG. 1, the computing device 100 includes a CPU110, an NPU120, and a memory 130, which may be connected by, but are not limited to, a bus.
Two execution environments may be included in the CPU110: a rich execution environment (REE) and a trusted execution environment (TEE). One or more APPs may be deployed in the REE. An AI driver 111 may be deployed in the TEE. The AI driver 111 may be used, but is not limited to being used, to provide a driver for the NPU.
The NPU120 may include an NPU controller 121 and an arithmetic logic unit 122. The NPU controller 121 may be used, but is not limited to being used, to receive the AI model sent by the CPU110, schedule the AI model and execute it based on the prepared input data, and feed the execution result of the AI model back to the CPU110. The arithmetic logic unit 122 may be used, but is not limited to being used, to receive execution commands issued by the NPU controller 121, execute the AI model according to such a command, and feed the execution result back to the NPU controller 121.
The memory 130 may include two types of storage space: a non-secure storage space 131 and a secure storage space 133. The non-secure storage space 131 is mainly used to store non-secure data (i.e., data other than private data), and the secure storage space 133 is mainly used to store secure data (i.e., private data).
During operation, the computing device 100 may first initialize an APP on the CPU110 and invoke a service interface of an AI Service (not shown in FIG. 1) on the CPU110, such as an AI model loading interface, to send an AI model file recognizable by the AI Service to the AI Service; the AI Service parses the AI model and converts the parsed AI model file into a format that the NPU120 can process. The AI Service may then invoke the user-mode interface of the AI driver 111 on the CPU110 via the REE-TEE inter-core communication interface to load the AI model into the NPU120, where it is saved. A successful loading result can also be returned to the APP through the AI driver 111 and the AI Service in turn.
After the AI model is loaded successfully and the input data is ready (for example, the camera has output one frame of image), the APP can issue a command (i.e., an AI model execution command) that invokes the AI driver 111 to perform AI model inference. After receiving the command issued by the APP, the AI driver 111 may check the memory space address of the input data and the memory space address of the output data to determine whether the two addresses are legal. Upon finding both addresses legal, the AI driver 111 can send the execution command and the associated memory space addresses to the NPU120. Finally, the NPU120 performs the corresponding inference task and returns the execution result to the APP in the CPU110 through the AI driver 111. Illustratively, "input data ready" means that data for AI model inference has been obtained; for example, in an intelligent vehicle, each frame of image captured by the camera is automatically stored in the memory 130, and each time an image is stored, the APP issues an execution command instructing the NPU to fetch the image from the memory 130 for AI model inference, and/or to store the corresponding inference result in the memory 130.
In the computing device 100 shown in FIG. 1, the APP in the CPU110 is deployed in the REE while the AI driver 111 is deployed in the TEE. Therefore, every time an AI operation starts, the CPU110 must switch from the REE to the TEE, which increases processing time and brings significant performance overhead. Moreover, since AI computation is usually performed per data frame, a REE-to-TEE switch must be initiated in the CPU110 for every data frame processed, which slows processing considerably and hurts AI processing efficiency. In addition, if the AI driver 111 were instead deployed in the REE of the CPU110, a malicious APP could easily tamper with data in the REE, creating a risk of private data leakage.
In view of this, embodiments of the present application provide another computing device that moves the AI protection check from the CPU to the NPU, replacing a software check that switches frequently and performs poorly with a hardware check that protects in real time and performs better, while reducing the CPU's workload. The relevant data is checked before the NPU performs AI processing, and no additional encryption or decryption is needed.
For example, referring to FIG. 2, FIG. 2 illustrates another hardware configuration of a computing device. The computing device 200 shown in FIG. 2 differs from the computing device 100 shown in FIG. 1 mainly in that the NPU120 of the computing device 200 has an AI protection unit 123 deployed in it. In the computing device 200 of FIG. 2, the AI protection unit 123 checks data such as the memory space address of the input data and the memory space address of the output data, whereas in the computing device 100 of FIG. 1 the AI driver 111 performs these checks. For the other components of the computing device 200 shown in FIG. 2, see the foregoing description of FIG. 1; details are not repeated here.
In addition, in the computing device 200 shown in FIG. 2, the AI driver 111 may continue to be deployed in the TEE of the CPU110, or may be deployed in the REE of the CPU110. In other words, in the computing device 200 shown in FIG. 2, there is no requirement on the deployment environment of the AI driver 111.
It should be understood that the structures illustrated by embodiments of the present application do not constitute a particular limitation on computing device 100 or 200. In other embodiments of the present application, computing device 100 or 200 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In some embodiments, after the AI model is loaded successfully and the input data is ready, the APP in the computing device 200 may issue a command (i.e., an AI model execution command) that invokes the AI driver 111 to perform AI model inference. After receiving the command issued by the APP, the AI driver 111 may send it to the AI protection unit 123. After receiving the command from the AI driver 111, the AI protection unit 123 may check the memory space address of the input data and the memory space address of the output data to determine whether the two addresses are legal. When both addresses are found legal, the AI protection unit 123 may send the execution command and the associated memory space addresses to the NPU controller 121. Finally, the NPU controller 121 may control the arithmetic logic unit 122 to perform the inference task and return the execution result to the APP in the CPU110 once the task completes. For example, when the AI protection unit 123 finds that at least one of the two memory space addresses is illegal, it may prohibit the NPU120 from performing the task.
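A hedged sketch of this gatekeeping flow, reusing the hypothetical ai_exec_cmd_t above; check_addr() and npu_controller_execute() are stand-ins for the checks and the hand-off described here, not names from the application:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the validity check (detailed below) and for handing the
 * command to the NPU controller 121. */
bool check_addr(uint32_t model_id, uint64_t addr, uint64_t size, bool is_input);
void npu_controller_execute(const ai_exec_cmd_t *cmd);

/* Execute only when both addresses are legal; otherwise restrict. */
int ai_protect_dispatch(const ai_exec_cmd_t *cmd)
{
    bool in_ok  = check_addr(cmd->model_id, cmd->input_addr,  cmd->input_size,  true);
    bool out_ok = check_addr(cmd->model_id, cmd->output_addr, cmd->output_size, false);
    if (!in_ok || !out_ok)
        return -1;               /* at least one address illegal: prohibit the task */
    npu_controller_execute(cmd); /* both legal: run the target model */
    return 0;
}
```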
In some embodiments, during loading of the AI model associated with the APP deployed in the CPU110, a model registry and a model rule list may be created. By way of example, the model registry may be as shown in FIG. 3(A), and the model rule list as shown in FIG. 3(B). For the model registry, referring to FIG. 3(A), each model may have an address, a rule table offset, and a rule count. "Model N" in FIG. 3(A) can be understood, but is not limited to being understood, as the identifier of the Nth model; the rule table offset refers to the offset of the model's first rule relative to the first rule in the model rule list; and the rule count refers to the number of rules the model has in the model rule list. For example, with continued reference to FIG. 3, for model 2 the rule table offset may be 3 and the rule count may be 2.
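The registry and rule entries of FIG. 3 might be represented as follows; this is a sketch assuming flat arrays, with illustrative field names:

```c
#include <stdint.h>

/* One entry per model, as in FIG. 3(A): the offset locates the model's
 * first rule in the shared model rule list; the count says how many
 * rules (one per input and per output) the model owns. */
typedef struct {
    uint32_t model_id;      /* "model N" */
    uint32_t rule_offset;   /* offset of the model's first rule */
    uint32_t rule_count;    /* number of rules in the model rule list */
} model_reg_entry_t;

/* One entry per input/output, as in FIG. 3(B). */
typedef struct {
    uint16_t type_mask;     /* data type mask: one bit per data type */
    uint8_t  table_type;    /* 0 = whitelist, 1 = blacklist */
} model_rule_t;

/* e.g. model 2 of FIG. 3: rule table offset 3, rule count 2 */
static const model_reg_entry_t model2 = {
    .model_id = 2, .rule_offset = 3, .rule_count = 2
};
```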
For the model rule list, referring to FIG. 3(B), the rule table corresponding to each model may include a data type mask for each input of the model, a data type mask for each output, and a table type for each data type mask. The data type mask for an input indicates the data required by that input. The table type for a data type mask indicates whether the data types marked by the mask are allowed or disallowed. For example, the data type mask may take the form of bits, each bit indicating one data type.
For example, assume a model has two inputs and two outputs: the data of input 1 may be original video data or original picture data; the data of input 2 may only be original picture data; the data of output 1 may not be face feature data; and the data of output 2 may only be face feature data. The rule list for this model may then be as shown in Table 1. In Table 1, the left column holds the data type masks for the inputs and outputs, from top to bottom: the mask of input 1, the mask of input 2, the mask of output 1, and the mask of output 2. The right column holds the corresponding table type, where table type 0 may represent a whitelist (the data types marked by the corresponding mask are allowed) and table type 1 may represent a blacklist (the data types marked by the corresponding mask are not allowed). In Table 1 the data type masks are written in bits, with "0b" indicating binary. Within each mask, reading from right to left, each bit indicates one data type. Taking "0b0011" as an example: from right to left, the first bit may represent original video data, the second original picture data, the third face feature data, and the fourth face coordinate information. A bit value of 1 means the corresponding data type is required; a value of 0 means it is not. With continued reference to Table 1, the mask of input 1 is "0b0011", indicating that original video data and original picture data are required; since the table type for this mask is a whitelist, the marked data types are allowed and all other data is not. The mask of output 1 is "0b0100", marking face feature data; since the table type for this mask is a blacklist, the marked data type is not allowed and all other data is. (These mask semantics are sketched in code after Table 1.)
TABLE 1
Data type mask  Table type
0b0011          0
0b0010          0
0b0100          1
0b0100          0
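The bit assignments used in the walk-through above can be captured in a small sketch, reusing the hypothetical model_rule_t from the earlier sketch; the enum names are illustrative, not from the application:

```c
/* Bit positions assumed from the Table 1 walk-through (right to left). */
enum {
    DT_VIDEO      = 1u << 0,  /* original video data */
    DT_PICTURE    = 1u << 1,  /* original picture data */
    DT_FACE_FEAT  = 1u << 2,  /* face feature data */
    DT_FACE_COORD = 1u << 3,  /* face coordinate information */
};

/* Table 1, top to bottom: input 1, input 2, output 1, output 2. */
static const model_rule_t example_rules[4] = {
    { .type_mask = DT_VIDEO | DT_PICTURE, .table_type = 0 }, /* 0b0011, whitelist */
    { .type_mask = DT_PICTURE,            .table_type = 0 }, /* 0b0010, whitelist */
    { .type_mask = DT_FACE_FEAT,          .table_type = 1 }, /* 0b0100, blacklist */
    { .type_mask = DT_FACE_FEAT,          .table_type = 0 }, /* 0b0100, whitelist */
};
```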
In addition, the CPU110 in the computing device 200 may dynamically generate a buffer attribute list when allocating or freeing memory space for data. The buffer attribute list may include the start address of each buffer, the length of the buffer, and the data type of the data stored in the buffer. For example, the buffer attribute list may be as shown in FIG. 4, where buffer N denotes the Nth buffer. In some embodiments, a buffer may be understood as a piece of memory.
For example, assuming that the data type "0" represents the original video data, the data type "1" represents the original picture data, the data type "2" represents the face feature data, and the data type "3" represents the face coordinate information, the buffer attribute list may be as shown in table 2. In table 2, video represents original Video data, picture represents original Picture data, feature_data represents face Feature data, and Coordinate represents face Coordinate information; x_addr represents the start address of the buffer storing x, and x_size represents the length of the buffer storing x. The left column in table 2 represents the start address of a buffer storing certain data, the middle column represents the length of a buffer storing certain data, and the right column represents the data type of data stored in a buffer.
TABLE 2
Start address      Buffer length      Data type
Video_addr         Video_size         0
Picture_addr       Picture_size       1
Feature_data_addr  Feature_data_size  2
Coordinate_addr    Coordinate_size    3
Further, after the AI protection unit 123 in the NPU120 of the computing device 200 receives the command sent by the AI driver 111, it may look up the data type of the input data and the data type of the output data in the buffer attribute list, using the memory address and length of each, both of which are carried in the command. When a memory address carried in the command is absent from the buffer attribute list, the incoming address is considered illegal, and a check-failure result can be returned. When the memory addresses carried in the command are present in the buffer attribute list, the incoming addresses are considered legal, and the subsequent operations can proceed.
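A sketch of this lookup, assuming the buffer attribute list is a flat array that is searched linearly (the application does not prescribe a data structure):

```c
#include <stddef.h>
#include <stdint.h>

/* One row of the buffer attribute list (FIG. 4 / Table 2). */
typedef struct {
    uint64_t start;      /* start address of the buffer */
    uint64_t size;       /* length of the buffer */
    uint8_t  data_type;  /* e.g. 0 = video, 1 = picture, per Table 2 */
} buf_attr_t;

/* Return the matching entry, or NULL when the address range is not in
 * the list, i.e. the incoming address is illegal and the check fails. */
static const buf_attr_t *find_buffer(const buf_attr_t *list, size_t n,
                                     uint64_t addr, uint64_t len)
{
    for (size_t i = 0; i < n; i++)
        if (addr >= list[i].start &&
            addr + len <= list[i].start + list[i].size)
            return &list[i];
    return NULL;
}
```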
Meanwhile, the AI protection unit 123 may also query the model registry using the identifier of the required AI model carried in the command, obtaining the model's rule table offset and rule count. With the queried offset and count, together with the input and output sequence numbers of the AI model carried in the command, it can then query the model rule list to obtain the model rule table of that AI model. In some embodiments, once the AI model to be used is determined, the input and output sequence numbers corresponding to it can be determined; these sequence numbers may be, but are not limited to being, preset.
The AI protection unit 123 can then determine, from the table types in the model rule table of the AI model, whether the data types marked by the corresponding data type masks are allowed to be used. For example, when a table type marks its data type mask with the whitelist mechanism, the data types marked by that mask may be considered allowed; when it marks the blacklist mechanism, the data types marked by that mask may be considered disallowed.
When the marked data types are allowed (whitelist), a memory address is legal if the data type of the buffer it points to is among the data types marked in the mask; the check then passes, and otherwise it fails. When the marked data types are disallowed (blacklist), a memory address is legal if the data type of the buffer it points to is not among the data types marked in the mask; the check then passes, and otherwise it fails.
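Combining the buffer attribute lookup with the model rule table, the whitelist/blacklist decision just described reduces to a single bit test. A minimal sketch, again with hypothetical names:

```c
#include <stdbool.h>
#include <stdint.h>

/* An address passes when its buffer's data type is marked in the mask
 * and the rule is a whitelist, or unmarked and the rule is a blacklist. */
static bool rule_permits(const model_rule_t *rule, uint8_t data_type)
{
    bool marked = (rule->type_mask & (1u << data_type)) != 0;
    return (rule->table_type == 0) ? marked : !marked;
}
```

A check_addr() like the one in the earlier dispatch sketch could then simply compose find_buffer() and rule_permits(): fail if the lookup returns NULL, otherwise test the buffer's data type against the rule.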
After the AI protection unit 123 has verified that all memory addresses in the command sent by the AI driver 111 are legal, the AI protection unit 123 may control the NPU120 to execute the AI model and complete the corresponding data processing.
In some embodiments, as an alternative to the "buffer attribute list" shown in FIG. 4, a flag bit may be added to each page table entry when virtual addresses are mapped to physical addresses by a unit that functions similarly to the memory management unit (MMU) in the CPU110. The flag bit tags the type of data stored in that memory space, so that the AI protection unit 123 can perform the validity check through the flag bit. Alternatively, other memory management modules in the CPU110 can divide memory into different regions and mark the data type of the data stored in each region; the AI protection unit 123 then performs the validity check based on the memory region.
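One possible encoding of such a flag bit, sketched under the assumption of a software-visible 64-bit page table entry with spare high bits; the application does not fix any layout, so this is purely illustrative:

```c
#include <stdint.h>

/* Hypothetical layout: four spare high bits of the page table entry
 * carry the data type of the mapped memory. */
#define PTE_DTYPE_SHIFT 56
#define PTE_DTYPE_MASK  0xFULL

static inline uint8_t pte_data_type(uint64_t pte)
{
    return (uint8_t)((pte >> PTE_DTYPE_SHIFT) & PTE_DTYPE_MASK);
}
```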
In some embodiments, instead of using the model rule list in FIG. 3(B) to determine whether the memory addresses of the input data and output data are legal, a memory space that the AI model is allowed to access may be configured for each input and output. In this way, when the AI protection unit 123 obtains the identifier of the AI model and the memory address of the corresponding data, it can directly check whether that memory address is accessible to the AI model, completing the validity check.
In addition, in FIG. 3(B), instead of data type masks, different categories may be defined directly to indicate different access rights; for example, category "0" may indicate that all buffers are allowed to be accessed, category "1" that all original data is allowed to be accessed, category "2" that all private data is allowed to be accessed, and so on. For example, if category "5" indicates that original video data and original picture data can be accessed, category "6" that original picture data can be accessed, and category "7" that face feature data can be accessed, then Table 1 above may be updated to Table 3 below.
TABLE 3
Category  Table type
5         0
6         0
7         1
7         0
In this case, the AI protection unit 123 can determine whether the corresponding memory address is legal by directly comparing the corresponding categories.
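A sketch of this category variant, with the category values taken from the example above and all names hypothetical: the rule stores a single category code, and the check is a direct comparison.

```c
#include <stdbool.h>
#include <stdint.h>

/* One Table 3 row: a category code plus a whitelist/blacklist flag. */
typedef struct {
    uint8_t category;    /* e.g. 5 = video+picture, 6 = picture, 7 = face features */
    uint8_t table_type;  /* 0 = whitelist, 1 = blacklist */
} category_rule_t;

static bool category_permits(const category_rule_t *r, uint8_t buf_category)
{
    bool match = (r->category == buf_category);
    return (r->table_type == 0) ? match : !match;
}
```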
Next, based on the foregoing, the embodiment of the present application further provides a data protection method.
For example, referring to FIG. 5, FIG. 5 illustrates a data protection method. It is to be appreciated that the method may be, but is not limited to being, performed by the computing device 200 depicted in FIG. 2. It will also be appreciated that the CPU and NPU described in FIG. 5 may be replaced with other processors, and the resulting solution still falls within the scope of the present application. As shown in FIG. 5, the data protection method includes the following steps:
s501, the CPU sends a first message to an AI protection unit in the NPU through AI driving.
In this embodiment, after the APP deployed in the CPU initiates a task, the CPU may send the first message to the AI protection unit in the NPU through the AI driver. The first message may be used to instruct execution of the target model corresponding to the target APP, and may include a first memory space address of input data and a second memory space address of output data of the target model.
S502, an AI protection unit in the NPU judges whether the first memory space address and the second memory space address are legal or not.
In this embodiment, the AI protection unit may determine whether the first memory space address and the second memory space address are legal using the checking methods described above. When at least one of the two is illegal, S503 is performed; when both are legal, S504 is performed.
S503, the AI protection unit limits the NPU to execute the target model.
In this embodiment, when either the first memory space address or the second memory space address is illegal, private data could easily leak, so the AI protection unit restricts the NPU from executing the target model.
S504, the AI protection unit controls the NPU to execute the target model.
In this embodiment, when the first memory space address and the second memory space address are both legal, it indicates that there is no risk of leakage of private data at this time, so the AI protection unit may control the NPU to execute the target model.
In this way, the AI protection check is moved from the CPU to the NPU: the CPU side no longer switches execution environments frequently, and the NPU side uses a hardware check, achieving real-time protection and better performance.
Based on the method in the above embodiment, the present application provides a computer-readable storage medium storing a computer program, which when executed on a processor, causes the processor to perform the method in the above embodiment.
Based on the method in the above embodiment, the present application provides a computer program product, which is characterized in that the computer program product when run on a processor causes the processor to perform the method in the above embodiment.
Based on the method in the above embodiments, the embodiment of the present application further provides a chip. Referring to FIG. 6, FIG. 6 is a schematic structural diagram of a chip according to an embodiment of the present application. As shown in FIG. 6, the chip 600 includes one or more processors 601 and an interface circuit 602. Optionally, the chip 600 may also contain a bus 603. Wherein:
the processor 601 may be an integrated circuit chip with signal processing capabilities. During implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods and steps disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The interface circuit 602 may be used for transmitting or receiving data, instructions, or information, and the processor 601 may process using the data, instructions, or other information received by the interface circuit 602, and may transmit processing completion information through the interface circuit 602.
Optionally, chip 600 also includes memory, which may include read only memory and random access memory, and provides operating instructions and data to the processor. A portion of the memory may also include non-volatile random access memory (NVRAM).
Optionally, the memory stores executable software modules or data structures and the processor may perform corresponding operations by invoking operational instructions stored in the memory (which may be stored in an operating system).
Alternatively, the interface circuit 602 may be configured to output the execution result of the processor 601.
It should be noted that, the functions corresponding to the processor 601 and the interface circuit 602 may be implemented by a hardware design, a software design, or a combination of hardware and software, which is not limited herein.
It will be appreciated that the steps of the method embodiments described above may be performed by logic circuitry in the form of hardware in a processor or instructions in the form of software.
It should be understood that, the sequence number of each step in the foregoing embodiment does not mean the execution sequence, and the execution sequence of each process should be determined by the function and the internal logic of each process, and should not limit the implementation process of the embodiment of the present application in any way. In addition, in some possible implementations, each step in the foregoing embodiments may be selectively performed according to practical situations, and may be partially performed or may be performed entirely, which is not limited herein.
It is to be appreciated that the processor in embodiments of the present application may be a central processing unit (central processing unit, CPU), but may also be other general purpose processors, digital signal processors (digital signal processor, DSP), application specific integrated circuits (application specific integrated circuit, ASIC), field programmable gate arrays (field programmable gate array, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof. The general purpose processor may be a microprocessor, but in the alternative, it may be any conventional processor.
The method steps in the embodiments of the present application may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC.
In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
It will be appreciated that the various numerical numbers referred to in the embodiments of the present application are merely for ease of description and are not intended to limit the scope of the embodiments of the present application.

Claims (13)

1. A computing device comprising a first processor having a target Application (APP) and a first driver disposed thereon and a second processor having a first protection unit disposed therein;
the first processor is configured to send a first message to the second processor through the first driver, where the first message is used to instruct to execute a target model corresponding to the target APP, and the first message includes a first memory space address of input data and a second memory space address of output data corresponding to the target model;
the second processor is configured to determine, by using the first protection unit, validity of the first memory space address and the second memory space address;
the second processor is further configured to execute the target model when the first protection unit determines that both the first memory space address and the second memory space address are legal, or limit execution of the target model when the first protection unit determines that at least one of the first memory space address and the second memory space address is illegal.
2. The computing device of claim 1, wherein the second processor is further specifically configured to:
determining, by the first protection unit, whether the target model can access a target address, where the target address is the first memory space address or the second memory space address;
and when the target model cannot access the target address, determining that the target address is illegal.
3. The computing device of claim 1, wherein the second processor is specifically configured to:
determining, by the first protection unit, whether a target address exists in a first attribute list configured in advance, where the target address is the first memory space address or the second memory space address, and the first attribute list includes one or more legal memory space addresses and a data type of data stored in each legal memory space address;
and when the target address does not exist in the first attribute list, determining that the target address is illegal.
4. A computing device according to claim 2 or 3, wherein the second processor is further specifically configured to:
when the target model can access the target address or the target address exists in the first attribute list, determining, by the first protection unit, a first type of data stored in the target address from the first attribute list;
querying a preconfigured model rule list through the first protection unit, and determining a first rule list corresponding to the target model, wherein the model rule list comprises target input data and target output data required by each model and usage rules followed by the target input data and the target output data;
and determining whether the target address is legal or not through the first protection unit according to the first rule list and the data type corresponding to the target address in the first attribute list.
5. The computing device of claim 4, wherein the model rule list includes a data type mask, the data type mask being composed of at least one bit, wherein each bit is used to indicate one data type.
6. A data protection method, applied to a computing device, the computing device including a first processor and a second processor, the first processor having a target Application (APP) and a first driver disposed thereon, the second processor having a first protection unit disposed therein, the method comprising:
the first processor sends a first message to the second processor through the first driver, wherein the first message is used for indicating to execute a target model corresponding to the target APP, and the first message comprises a first memory space address of input data and a second memory space address of output data corresponding to the target model;
the second processor determines the legitimacy of the first memory space address and the second memory space address through the first protection unit;
the second processor executes the target model when the first protection unit determines that both the first memory space address and the second memory space address are legal, or restricts execution of the target model when the first protection unit determines that at least one of the first memory space address and the second memory space address is illegal.
7. The method of claim 6, wherein the second processor determines, by the first protection unit, validity of the first memory space address and the second memory space address, specifically comprising:
the second processor determines whether the target model can access a target address through the first protection unit, wherein the target address is the first memory space address or the second memory space address;
when the target model cannot access the target address, the second processor determines that the target address is illegal.
8. The method of claim 6, wherein the second processor determines, by the first protection unit, validity of the first memory space address and the second memory space address, specifically comprising:
the second processor determines whether a target address exists in a first preset attribute list through the first protection unit, wherein the target address is the first memory space address or the second memory space address, and the first attribute list comprises one or more legal memory space addresses and the data type of data stored in each legal memory space address;
when the target address does not exist in the first attribute list, the second processor determines that the target address is illegal.
9. The method according to claim 6 or 7, characterized in that the method further comprises:
when the target model can access the target address or the target address exists in the first attribute list, the second processor determines a first type of data stored in the target address from the first attribute list through the first protection unit;
the second processor queries a preconfigured model rule list through the first protection unit to determine a first rule list corresponding to the target model, wherein the model rule list comprises target input data and target output data required by each model and usage rules followed by the target input data and the target output data;
and the second processor determines whether the target address is legal or not through the first protection unit according to the first rule list and the data type corresponding to the target address in the first attribute list.
10. The method of claim 9, wherein the model rule list includes a data type mask, the data type mask being composed of at least one bit, wherein each bit is used to indicate one data type.
11. A computer-readable storage medium storing a computer program which, when run on a processor, causes the processor to perform the method of any one of claims 6-10.
12. A computer program product, characterized in that the computer program product, when run on a processor, causes the processor to perform the method according to any of claims 6-10.
13. A chip comprising at least one processor and an interface;
the at least one processor obtains program instructions or data through the interface;
the at least one processor is configured to execute the program instructions to implement the method of any of claims 6-10.
CN202210704496.5A 2022-06-21 2022-06-21 Computing device and data protection method Pending CN117312228A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210704496.5A CN117312228A (en) 2022-06-21 2022-06-21 Computing device and data protection method
PCT/CN2023/094161 WO2023246373A1 (en) 2022-06-21 2023-05-15 Computing device and data protection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210704496.5A CN117312228A (en) 2022-06-21 2022-06-21 Computing device and data protection method

Publications (1)

Publication Number Publication Date
CN117312228A true CN117312228A (en) 2023-12-29

Family

ID=89254071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210704496.5A Pending CN117312228A (en) 2022-06-21 2022-06-21 Computing device and data protection method

Country Status (2)

Country Link
CN (1) CN117312228A (en)
WO (1) WO2023246373A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10068098B2 (en) * 2015-04-17 2018-09-04 Cicer One Technologies Inc. Data storage and access platform with jurisdictional control
CN109409105B (en) * 2018-09-30 2022-09-23 联想(北京)有限公司 Switching method, processor and electronic equipment
CN114641769A (en) * 2020-10-15 2022-06-17 华为技术有限公司 Safety measuring device and method for processor
CN112506847B (en) * 2021-02-04 2021-04-30 上海励驰半导体有限公司 Multiprocessor communication method and system

Also Published As

Publication number Publication date
WO2023246373A1 (en) 2023-12-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination