CN116484439B - Rust language-based safety enhancement model development method and system - Google Patents


Info

Publication number
CN116484439B
CN116484439B (application CN202310750401.8A)
Authority
CN
China
Prior art keywords
unsafe
function
internal
principle
pointer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310750401.8A
Other languages
Chinese (zh)
Other versions
CN116484439A (en)
Inventor
董攀
江仁霜
黄辰林
丁滟
蹇松雷
谭郁松
李宝
任怡
王晓川
张建锋
谭霜
罗军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202310750401.8A priority Critical patent/CN116484439B/en
Publication of CN116484439A publication Critical patent/CN116484439A/en
Application granted granted Critical
Publication of CN116484439B publication Critical patent/CN116484439B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/74Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information operating in dual or compartmented mode, i.e. at least one secure mode
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/03Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
    • G06F2221/033Test or assess software

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a Rust language-based security enhancement model development method and system. Equivalent function replacement design: for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields, the causes of unsafety are statically analyzed and the two operations are replaced with standard library functions and/or interfaces. Internal unsafe encapsulation design: unsafe operations that cannot be directly replaced are correctly encapsulated inside internal unsafe functions before the unsafe code can expose its unsafety. Added ACSL-like formal design: the formal contract concept of ACSL is applied to the encapsulated internal unsafe functions, and ACSL-like formal design verification is added for each internal unsafe encapsulation function, providing lifetime support and restricting the set of unsafe states. The invention reduces the complexity of formal verification and improves the security of Rust-based operating systems.

Description

Rust language-based safety enhancement model development method and system
Technical Field
The invention relates to the technical field of computer operating systems, and particularly discloses a Rust language-based security enhancement model development method and system.
Background
Conventional system-level programming languages suffer from memory- and concurrency-safety problems, while most memory-safe programming languages carry significant runtime overhead that reduces operating efficiency. Through its unique safety features (ownership and lifetimes), the Rust language inherently avoids many serious memory and concurrency errors, and the binary code compiled from Rust approaches the execution efficiency of C code. Rust has therefore become one of the most popular safe programming languages in recent years and is often used to build basic software such as operating systems and browsers. However, existing projects written in Rust have still been found to contain many serious software defects. For example, the RustSec Advisory Database has published hundreds of Rust vulnerabilities covering a range of memory- and type-safety defects. Unsafe factors in the Rust language have become a bottleneck for improving the security of the underlying software (such as operating systems) developed in Rust.
First, the main cause of defects in Rust system-level projects is that they contain unsafe code whose safety the compiler cannot guarantee. Specifically, Rust comprises two sub-languages: Safe Rust and Unsafe Rust. Safe Rust can be used to write high-performance, safe applications and libraries, and it constitutes the main body of Rust projects. Unsafe Rust uses the `unsafe` keyword to bypass some safety checks and perform operations that are difficult to realize in Safe Rust, mainly covering five capabilities: (1) dereferencing raw pointers; (2) calling unsafe functions or methods; (3) accessing or modifying mutable static variables; (4) implementing unsafe traits; (5) accessing fields of unions. Because of the unique safety mechanisms of the Rust language (lifetimes and ownership), programs written entirely in Safe Rust can avoid memory errors, so the use of Unsafe Rust is a major cause of system defects. From the standpoint of the actual system design process, however, Unsafe Rust code is unavoidable during implementation. This unavoidability of Unsafe Rust conflicts with system security, which makes it difficult to provide security assurance for operating systems developed in the Rust language. Therefore, eliminating or reducing the use of Unsafe Rust is an important point in improving system security.
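The five operations listed above can be sketched in Rust as follows (a minimal illustration written for this summary; it is not code from the patent or its figures):

```rust
// Minimal illustration of the five operations that require `unsafe` in Rust.

static mut COUNTER: u32 = 0; // (3) a mutable static variable

union IntBits {              // (5) unions: reading any field is unsafe
    i: u32,
    f: f32,
}

unsafe fn unchecked_inc(x: u32) -> u32 { // (2) an unsafe function
    x.wrapping_add(1)
}

unsafe trait ByteRepr {}             // (4) an unsafe trait...
unsafe impl ByteRepr for u32 {}      // ...whose implementation is also unsafe

fn main() {
    let v = 7u32;
    let p: *const u32 = &v;          // creating a raw pointer is safe,
    let read = unsafe { *p };        // (1) dereferencing it is not
    let inc = unsafe { unchecked_inc(read) };
    let bits = unsafe { IntBits { f: 1.0 }.i };
    unsafe { COUNTER += 1 };         // (3) mutating the static
    println!("{read} {inc} {bits:#x}");
}
```

Each `unsafe` block here marks exactly one of the five capabilities; everything outside those blocks is checked by the compiler as usual.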
Second, although most defects in Rust systems originate from Unsafe Rust code, the interaction of Safe Rust and Unsafe Rust also exacerbates the unsafety of a Rust system. Many memory-security issues in Rust stem from the misunderstanding and misuse of lifetimes, such as Memory Life Cycle (MLC) errors. When Unsafe Rust interacts with Safe Rust, the complexity of lifetime analysis and use increases significantly, making system defects more likely. By the nature of the Rust language design, Rust uses only the `unsafe` keyword to distinguish Safe Rust from Unsafe Rust and does not isolate safe and unsafe operations. Because Unsafe Rust operations are unavoidable, interaction between Unsafe Rust and Safe Rust is unavoidable, and so are the security problems this interaction causes. Therefore, effectively reducing the interaction between Unsafe Rust and Safe Rust is also key to improving system security.
Finally, to increase the security of software developed in the Rust language, the system may be verified formally. Formal verification is a mathematical method for proving that a system is free of defects, thereby completely eliminating the unsafety of unsafe code. However, because the amount of code in an operating system to be verified is huge, its boundary is wide, and the dependencies among kernel code are strong, the formal-verification state space explodes; the process also depends heavily on the skill of engineers, so verification is costly and inefficient. As a result, formal methods can only complete verification of a subset of a Rust system and cannot guarantee the security of the whole system. Among other Rust system testing schemes, both static analysis and dynamic detection have been attempted. However, due to factors such as the short history of Rust development, existing defect-detection schemes have shortcomings. For example, the accuracy of fuzz testing depends on branch coverage and suffers from high complexity and intricate call sequences; static analysis can only analyze specific problems and suffers from false negatives and false positives.
In general, existing Rust-language methods do not solve the security problem effectively, safely, and simply. Reducing the use of unsafe code and reducing interactions between safe and unsafe code is critical to reducing the complexity of formal verification.
Minimizing unsafe code and reducing the interaction between unsafe and safe code, starting from the factors that make a Rust system unsafe, and thereby reducing the complexity of formal verification, is key to improving the safety of a Rust system. Thus, the greatest challenge in assuring the security of an operating system developed in Rust is the need to reduce both the unsafety of Unsafe Rust itself and that of its interaction with Safe Rust. This requires the operating system to be designed to: 1) eliminate or reduce the use of unsafe code; 2) minimize unintended interactions between safe and unsafe code. A new architecture is therefore needed that reduces the use of unsafe code in a flexible way and effectively isolates safe and unsafe code. To improve the development efficiency of the operating system and meet the requirements of safety and extensibility, the new structure preferably has the following characteristics: 1) Functionally equivalent replacement. Replacing unsafe code with functionally equivalent safe operations directly reduces the use of Unsafe Rust. 2) Internal unsafe encapsulation. Unavoidable unsafe operations are encapsulated inside a safe structure, reducing the reach of unsafety and its interaction with safe code, and thus indirectly reducing the unsafety of Unsafe Rust. 3) Formal specifications. Formal specifications are used to limit the set of unsafe state spaces, thereby reducing the unsafety of Unsafe Rust as well as the complexity of formal verification.
In fact, such a design helps reduce both the scope of the system's unsafe code and the unsafety of its interaction with safe code. Since the amount of code and the dependencies among kernel code are the main causes of the complexity of formal verification, as the amount of unsafe code and the degree of interaction decrease, the complexity of formal verification decreases, which makes it possible to design a truly safe operating system. Such a structural model differs from existing ones and requires a new security enhancement model to be designed.
Therefore, the many unsafe factors existing in conventional Rust systems are a technical problem to be solved in the prior art.
Disclosure of Invention
The invention provides a Rust language-based security enhancement model development method and system, aiming to solve the technical problem of the many unsafe factors existing in conventional Rust systems.
One aspect of the invention relates to a method for developing a security enhancement model based on Rust language, comprising the following steps:
equivalent function replacement design: according to a first principle set in advance, for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields, statically analyzing the causes of unsafety and replacing the two unsafe operations with standard library functions and/or interfaces;
internal unsafe encapsulation design: according to a second principle set in advance, for unsafe operations that cannot be directly replaced, correctly encapsulating them inside internal unsafe functions before the unsafe code can expose its unsafety;
adding ACSL-like formal design: according to a third principle set in advance, applying the formal contract concept of ACSL to the encapsulated internal unsafe functions, and adding ACSL-like formal design verification for each internal unsafe encapsulation function, thereby providing lifetime support and restricting the set of unsafe states.
Further, the step of equivalent function replacement design includes:
replacement of the mutable static variable type: the smart pointer Mutex in Rust is used to replace uses of mutable static variables; borrow checking on the smart pointer Mutex is thereby deferred to runtime: an operation that violates the borrowing rules can pass when the system is compiled, but panics and exits when the system is running;
replacement of the union type: operations on unions are optimized using an automated method based on feature detection and replacement; by detecting uses of unions, the union type is replaced with the struct keyword.
Further, the step of internal unsafe encapsulation design includes:
encapsulation of raw pointer dereferencing: by adjusting the architecture of the unsafe code, the raw-pointer-dereferencing functionality is encapsulated in an internal unsafe function; the function block wrapped by the unsafe keyword that dereferences the raw pointer is converted into a library function that is safe to call, and the raw pointer is dereferenced using the internal unsafe function inside the library function;
encapsulation of unsafe functions and traits: an encapsulation optimization design is applied to unsafe functions and traits, the unsafe scope within the unsafe function block is adjusted according to the internal unsafe principle, and internal unsafe blocks are used as replacements.
Further, the step of adding ACSL-like formal designs includes:
raw pointer dereferencing: explicitly annotating the lifetime of the raw pointer with 'static; using assertions to restrict the state of the program when it runs to that point, inserting an assertion before the raw pointer is dereferenced so that the validity of the memory is judged when the program executes to that point; if the assertion is identified as holding, the code that ran before the assertion is judged to be qualified code;
unsafe functions and traits: annotating each encapsulated unsafe function with appropriate preconditions and postconditions; before the function is called, using assertions to detect whether the state provided by the current calling environment meets the requirements; when the preconditions are satisfied, assertions are also used to check the function's return state annotated by the postconditions.
Further, the first principle is the functionally equivalent replacement principle, used to replace the limited classes of unsafe operations with functionally equivalent Safe Rust functions; the second principle is the internal unsafe encapsulation principle, used to encapsulate unavoidable and irreplaceable unsafe operations with internal unsafe; the third principle is the principle of adding ACSL-like formal design verification, used to limit the set of unsafe states in the encapsulated internal unsafe functions by applying the formal contract concept of ACSL.
Another aspect of the invention relates to a Rust language-based security enhancement model development system, comprising:
an equivalent function replacement design module, used, according to a first principle set in advance, for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields, to statically analyze the causes of unsafety and replace the two unsafe operations with standard library functions and/or interfaces;
an internal unsafe encapsulation design module, used, according to a second principle set in advance, to correctly encapsulate unsafe operations that cannot be directly replaced inside internal unsafe functions before the unsafe code can expose its unsafety;
and an ACSL-like formal design adding module, used, according to a third principle set in advance, to apply the formal contract concept of ACSL to the encapsulated internal unsafe functions and add ACSL-like formal design verification for each internal unsafe encapsulation function, providing lifetime support and restricting the set of unsafe states.
Further, the equivalent functional replacement design module includes:
a replacement unit for the mutable static variable type, used to replace uses of mutable static variables with the smart pointer Mutex in Rust; borrow checking on the smart pointer Mutex is thereby deferred to runtime: an operation that violates the borrowing rules can pass when the system is compiled, but panics and exits when the system is running;
a replacement unit for the union type, used to optimize operations on unions with an automated method based on feature detection and replacement; by detecting uses of unions, the union type is replaced with the struct keyword.
Further, the internal unsafe package design module includes:
an encapsulation unit for raw pointer dereferencing, used to encapsulate the raw-pointer-dereferencing functionality in an internal unsafe function by adjusting the architecture of the unsafe code, converting the function block wrapped by the unsafe keyword that dereferences the raw pointer into a library function that is safe to call, and dereferencing the raw pointer with the internal unsafe function inside the library function;
and an encapsulation unit for unsafe functions and traits, used to apply an encapsulation optimization design to unsafe functions and traits, adjust the unsafe scope within the unsafe function block according to the internal unsafe principle, and replace it with internal unsafe blocks.
Further, the ACSL-like formal design adding module includes:
a raw pointer dereferencing unit, used to explicitly annotate the lifetime of the raw pointer with 'static; to use assertions to restrict the state of the program when it runs to that point, inserting an assertion before the raw pointer is dereferenced so that the validity of the memory is judged when the program executes to that point; and, if the assertion is identified as holding, to judge the code that ran before the assertion to be qualified code;
an unsafe function and trait unit, used to annotate each encapsulated unsafe function with appropriate preconditions and postconditions; before the function is called, to use assertions to detect whether the state provided by the current calling environment meets the requirements; and, when the preconditions are satisfied, to use assertions to check the function's return state annotated by the postconditions.
Further, the first principle is the functionally equivalent replacement principle, used to replace the limited classes of unsafe operations with functionally equivalent Safe Rust functions; the second principle is the internal unsafe encapsulation principle, used to encapsulate unavoidable and irreplaceable unsafe operations with internal unsafe; the third principle is the principle of adding ACSL-like formal design verification, used to limit the set of unsafe states in the encapsulated internal unsafe functions by applying the formal contract concept of ACSL.
The beneficial effects obtained by the invention are as follows:
the invention provides a method and a system for developing a security enhancement model based on Rust language, which are designed by replacing equivalent functions: according to a first principle set in advance, aiming at two unsafe operations of accessing or modifying a variable static variable and a unit field, statically analyzing unsafe reasons, and replacing the two unsafe operations by using standard library functions and/or interfaces; internal unsafe package design: according to a second principle set in advance, for unsafe operation which cannot be directly replaced, correctly packaging the unsafe operation which cannot be directly replaced in an internal unsafe function before the unsafe code is exposed to be unsafe; adding ACSL-like formalized design: according to a third principle set in advance, the ACSL formalization protocol idea is used for packaging the obtained internal unsafe function, and ACSL-like formalization design verification is added for the internal unsafe packaging function, so that life cycle support and unsafe state range limitation collection are provided. According to the method and the system for developing the security enhancement model based on the Rust language, provided by the invention, the substitution-based, internal unsafe encapsulation and ACSL-form verification mechanism are combined by designing the security enhancement model, so that the purposes of minimizing unsafe codes and reducing interaction between unsafe codes and safe codes are achieved, the complexity of formal verification is reduced, and the security of a Rust operating system is improved. The specific advantages include:
1) The method optimizes the limited classes of unsafe operations, so that the unsafe complexity in an operating system is reduced, the probability of false negatives and false positives in program testing is lowered, and the safety and reliability of the system are greatly improved.
2) By combining replacement and internal unsafe encapsulation, the scope of unsafe code effects, as well as the interaction between safe and unsafe code, can be effectively reduced. In addition, the reduced unsafe scope simplifies formal verification, providing greater potential for designing safer operating systems.
3) ACSL-style formal contracts are applied to the encapsulated internal unsafe functions, so that the set of unsafe states is limited and the possibility of unsafety is reduced.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a method for developing a security enhancement model based on Rust language;
FIG. 2 is a schematic view of a basic framework of the present invention;
FIG. 3 is a detailed flow chart of an embodiment of the steps of the equivalent functional alternative design shown in FIG. 1;
FIG. 4 is a schematic flow chart of replacing access/modification of mutable static variables in accordance with the present invention;
FIG. 5 is a detailed flow chart of an embodiment of the steps of the internal unsafe package design shown in FIG. 1;
FIG. 6 is a flow diagram of raw pointer dereferencing optimization in accordance with the present invention;
FIG. 7 is a schematic flow chart of the optimization of unsafe fn in the present invention;
FIG. 8 is a detailed flow chart of an embodiment of the steps of the incremental ACSL-like formalized design shown in FIG. 1;
FIG. 9 is a flow chart of verifying the correctness of a relevant code segment using assertions in the present invention;
FIG. 10 is a flow chart of an implementation of the present invention using a contract validation function;
FIG. 11 is a schematic diagram of the code used for internal unsafe functions in the standard library of the present invention;
FIG. 12 is a schematic diagram of the reference codes of the active level 4 page table of the present invention;
FIG. 13 is a schematic diagram of code accessing or modifying a mutable static variable according to the present invention;
FIG. 14 is a schematic diagram of the mutable-static-variable access optimization process code of the present invention;
FIG. 15 is a schematic diagram of optimized active_level_4_table () function code according to the present invention;
FIG. 16 is a functional block diagram of one embodiment of a Rust language-based security enhancement model development system provided by the present invention;
FIG. 17 is a functional block diagram of an embodiment of the equivalent functional replacement design module shown in FIG. 16;
FIG. 18 is a functional block diagram of an embodiment of the internal unsafe package design module shown in FIG. 16;
FIG. 19 is a functional block diagram of an embodiment of the add-on ACSL-like formalized design module shown in FIG. 16;
FIG. 20 is a comparison chart of the unsafe operation before and after BlogOS optimization in the present invention.
Reference numerals illustrate:
10. equivalent function replacement design module; 20. internal unsafe encapsulation design module; 30. ACSL-like formal design adding module; 11. replacement unit of the mutable static variable type; 12. replacement unit of the union type; 21. encapsulation unit for raw pointer dereferencing; 22. encapsulation unit for unsafe functions and traits; 31. raw pointer dereferencing unit; 32. unsafe function and trait unit.
Detailed Description
For a better understanding of the above technical solutions, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
As shown in FIGS. 1 and 2, the invention addresses the security problem of operating systems developed in the Rust language and constructs a Rust language-based security enhancement model development method and system that minimizes unsafe code and isolates safe and unsafe code, so that the security enhancement model can be verified effectively and simply, supporting the construction of a truly safe operating system. The invention discloses a method and system for developing a security enhancement model for basic software based on the Rust language; the basic framework is shown in FIG. 2.
The present invention rests on two key design premises: (1) unsafe operations in the Rust language form a limited set, i.e., there are only five classes of unsafe operations; (2) using internal unsafe can effectively improve safety and compress the state space to be formally verified. The present invention aims to flexibly, safely, and efficiently manage the five types of unsafe operations (i.e., dereferencing raw pointers, calling unsafe functions, implementing unsafe traits, accessing or modifying mutable static variables, and accessing union fields). The invention optimizes the limited classes of unsafe operations through a combination of replacement and encapsulation, and three principles must be followed in the optimization process:
First principle: functionally equivalent replacement. For the limited classes of unsafe operations, the invention first substitutes functionally equivalent Safe Rust functions, thereby directly reducing unsafe operations.
Second principle: internal unsafe encapsulation. For unavoidable and irreplaceable unsafe operations, the invention encapsulates them with internal unsafe. This reduces the scope of the unsafe impact, thereby indirectly reducing the unsafety of the unsafe code.
Third principle: adding ACSL-like formal design verification. The formal contract concept of ACSL is used to limit the set of unsafe states in the encapsulated internal unsafe functions. This principle can provide lifetime support and verify some internal unsafe conditions.
The invention is designed based on these principles, greatly reducing the content that must be assured and regulated during development as well as the unsafe state space, thereby reducing the complexity of formal verification, balancing testing efficiency against proof complexity, and making it possible to design a truly safe operating system.
Feature analysis is performed on the five (finitely many) unsafe operations, and the security enhancement model is designed by combining the three proposed optimization principles: for the Rust language, functionally equivalent replacement is performed for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields; internal unsafe functions are used to encapsulate dereferencing raw pointers, calling unsafe functions, and implementing unsafe traits. For system programs developed in Rust, ACSL-like formal verification is added for the three types of internally unsafe-encapsulated methods.
Please refer to fig. 1, fig. 1 is a flow chart of an embodiment of a method for developing a security enhancement model based on a Rust language according to the present invention, wherein the method for developing a security enhancement model based on a Rust language includes the following steps:
Step S100, equivalent function replacement design: according to a first principle set in advance, for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields, statically analyzing the causes of unsafety and replacing the two unsafe operations with standard library functions and/or interfaces.
According to the first principle set in advance, for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields, the causes of unsafety are statically analyzed, and standard library functions and/or interfaces are used to replace the two unsafe operations, thereby directly reducing the use of unsafe. The first principle is the functionally equivalent replacement principle, used to substitute functionally equivalent Safe Rust functions for the limited classes of unsafe operations.
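One such replacement can be sketched as follows (the global name and accessor are hypothetical, chosen for illustration): a `static mut` counter, whose every access would need `unsafe`, is rewritten with a `Mutex`, which moves borrow checking to runtime and needs no `unsafe` at all.

```rust
use std::sync::Mutex;

// Hypothetical global that would otherwise be `static mut PAGE_HITS: u64`.
// `Mutex::new` is usable in statics since Rust 1.63.
static PAGE_HITS: Mutex<u64> = Mutex::new(0);

// Safe accessor: no `unsafe` block is needed anywhere.
fn record_hit() -> u64 {
    let mut guard = PAGE_HITS.lock().unwrap(); // exclusive access enforced at runtime
    *guard += 1;
    *guard
}

fn main() {
    record_hit();
    record_hit();
    println!("hits = {}", *PAGE_HITS.lock().unwrap());
}
```

A borrow-rule violation (e.g. locking the same Mutex twice on one thread) compiles but fails at runtime, matching the pass-at-compile, panic-at-runtime behavior described above.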
Step S200, internal unsafe encapsulation design: according to a second principle set in advance, unsafe operations that cannot be directly replaced are correctly encapsulated inside internal unsafe functions before the unsafe code can expose its unsafety.
According to the second principle set in advance, for unsafe operations that cannot be directly replaced: Rust language programming specifications and empirical studies indicate that internal unsafe is a good method for encapsulating unsafe code, so unsafe operations that cannot be directly replaced are correctly encapsulated inside internal unsafe functions before the unsafe code can expose its unsafety. This theory has also been validated in the design of the Rust standard/core libraries, where a large number of library functions adopt the internal unsafe design model. "Internal unsafe" refers to a mode in which a function (or interface) implementation contains a block of unsafe code marked with the `unsafe` keyword inside, while the function itself can be regarded as safe. Taking std::sync::Mutex as an example, Mutex provides an encapsulated lock() method whose design employs an internal-unsafe architecture, as shown in FIG. 11.
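The standard-library code itself is not reproduced here; the following simplified stand-in (not the real std::sync::Mutex implementation, and the type and method names are invented) shows the same internal-unsafe shape: a safe public method whose body contains the only `unsafe` block.

```rust
use std::cell::UnsafeCell;

/// Simplified stand-in for a lock-like type (not the real std::sync::Mutex):
/// the public API is safe, and the only `unsafe` block lives inside `with`.
struct Guarded<T> {
    value: UnsafeCell<T>,
}

impl<T> Guarded<T> {
    fn new(v: T) -> Self {
        Self { value: UnsafeCell::new(v) }
    }

    /// Safe wrapper: `&mut self` proves exclusive access, so the raw
    /// dereference of the cell pointer cannot alias another borrow.
    fn with<R>(&mut self, f: impl FnOnce(&mut T) -> R) -> R {
        // SAFETY: exclusive `&mut self` guarantees no other access exists.
        let inner = unsafe { &mut *self.value.get() };
        f(inner)
    }
}

fn main() {
    let mut g = Guarded::new(10);
    let doubled = g.with(|v| { *v *= 2; *v });
    println!("{doubled}");
}
```

Callers of `with` never write `unsafe`; the unsafety is both confined and justified by an invariant stated in the SAFETY comment, which is the essence of the internal unsafe design model.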
Irreplaceable unsafe operations are encapsulated in the internal unsafe form. Specifically, for the three operations of dereferencing raw pointers, calling unsafe functions, and implementing unsafe traits, the unsafe functionality is encapsulated in safe functions according to the internal unsafe principle, providing safe interfaces for developers and reducing the scope of unsafe effects and its interaction with safe code. Here the second principle is the internal unsafe encapsulation principle, which encapsulates unavoidable and irreplaceable unsafe operations in the internal unsafe form.
Step S300, adding ACSL-like formal design: according to a third principle set in advance, the ACSL formal contract concept is applied to the internal unsafe functions obtained by encapsulation, and ACSL-like formal design verification is added to the internal unsafe encapsulation functions, thereby providing lifetime support and limiting the set of unsafe states.
For the internal unsafe functions obtained by encapsulation under the second principle, interaction between unsafe and safe code is reduced and a better isolation environment is provided. However, the unsafety inside an internal unsafe function is not yet further guaranteed. Therefore, according to a third principle set in advance, the ACSL formal contract concept is applied to the internal unsafe functions obtained by encapsulation, and ACSL-like formal design verification is added to the internal unsafe encapsulation functions, thereby providing lifetime support and limiting the set of unsafe states. The third principle is the principle of adding ACSL-like formal design verification, which uses the ACSL formal contract concept to limit the set of unsafe states in the encapsulated internal unsafe functions.
Further, please refer to fig. 3, fig. 3 is a detailed flow chart of an embodiment of step S100 shown in fig. 1, in this embodiment, step S100 includes:
step S110, substitution of mutable static variables: the smart pointer Mutex in Rust is used to replace mutable static variables; meanwhile, the compiler performs borrow checking on the smart pointer Mutex: if an operation violating the borrowing rules is identified while the system is compiling, compilation still passes; if such an operation is identified while the system is running, the program panics and exits.
The unsafe factors in accessing or modifying mutable static variables derive mainly from the fact that static variables are placed in the data segment at initialization, and the data in this region occupies that memory for the entire run of the program. When multiple threads access the same mutable static variable, concurrency hazards such as read-after-write, write-after-read, and write-write dependences may occur, causing data races and potentially deadlocking the system.
The smart pointer Mutex<T> in Rust is used instead of mutable static variables (as shown in fig. 4). Mutex follows the design principle of interior mutability, i.e. the inner value can be modified without the wrapper itself being mutable, which is functionally equivalent to accessing or modifying a mutable static variable. Meanwhile, the compiler performs borrow checking on Mutex; an operation that violates the borrowing rules passes compilation but panics and exits at runtime, preventing data races and guaranteeing memory safety. Overall, the goal of completely removing this class of unsafe operations is achieved at the cost of some added contention overhead in the system.
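A minimal sketch of this substitution, using the hosted `std::sync::Mutex` (a no_std kernel such as BlogOS would use an equivalent like `spin::Mutex`; the name `COUNTER` and the `increment` helper are illustrative):

```rust
use std::sync::Mutex;

// Instead of `static mut COUNTER: u64 = 0;`, whose every access needs an
// `unsafe` block, the value is wrapped in a Mutex: interior mutability with
// lock-based exclusion, so no access is `unsafe`.
static COUNTER: Mutex<u64> = Mutex::new(0);

// Safe API: the lock serializes all readers/writers, preventing data races.
fn increment() -> u64 {
    let mut guard = COUNTER.lock().unwrap();
    *guard += 1;
    *guard
}
```

Taking the lock twice on the same thread would not be caught at compile time; it fails at runtime, matching the pass-at-compile, panic-or-block-at-run behavior described above.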
Step S120, substitution of union types: operations on unions are optimized using an automated method based on feature detection and replacement; by detecting uses of unions, the union type is replaced with the struct keyword.
A union is similar in definition to a struct, but the shared storage between fields (i.e., the same location stores different data types) is the main source of unsafe factors when accessing union fields. Accessing a union field may therefore read memory as a field that was never written, causing a fault.
Operations on unions are optimized using an automated method based on feature detection and replacement. Specifically, by detecting uses of a union, the union type is replaced with the struct keyword. Since a union initializes only one of its fields, each field needs to be assigned a value in the struct. The Default trait needs to be implemented for the replacement struct type, with the previously uninitialized fields set by default assignments. In general, using struct instead of union directly reduces unsafe usage at the cost of a small amount of extra memory, reducing the system's unsafe factors.
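The substitution can be sketched as below; both type names are illustrative stand-ins, not taken from the patent's figures:

```rust
// Before: a union, whose fields share storage, so every field read is unsafe.
#[repr(C)]
union RawPair {
    word: u32,
    half: u16,
}

// After: a struct gives each field its own storage; Default supplies defined
// values for the fields the union would have left uninitialized.
#[derive(Default)]
struct SafePair {
    word: u32,
    half: u16,
}

fn read_word_unsafe(p: &RawPair) -> u32 {
    // SAFETY: sound only if `word` was the field last written.
    unsafe { p.word }
}

fn read_word_safe(p: &SafePair) -> u32 {
    p.word // no unsafe needed: the field has its own storage
}
```

The struct version trades a few extra bytes per value for the removal of every `unsafe` read site, which is exactly the overhead/benefit balance described above.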
Preferably, please refer to fig. 5, fig. 5 is a detailed flow chart of an embodiment of step S200 shown in fig. 1, in this embodiment, step S200 includes:
step S210, encapsulation of raw pointer dereferencing: by adjusting the architecture of the unsafe code, the raw-pointer-dereferencing functionality is encapsulated with an internal unsafe function; the dereferencing code block wrapped with the unsafe keyword is converted into a safely callable library function, and inside the library function an internal unsafe function performs the dereference.
For raw pointer dereferencing, the main source of unsafe factors is that the validity of the pointer address is not guaranteed: a raw pointer does not always point to a valid address, and multiple raw pointers may point to the same address space, or even to NULL. When a raw pointer dereference in Rust accesses an indeterminate address, a memory access error may result. Furthermore, raw pointers do not participate in automatic resource reclamation, so undefined behaviors (double-free, use-after-free, etc.) may occur. For example, if a pointer is used for dynamic allocation but the allocation is not released after use, the allocated block can no longer be used by the program, and memory leaks and allocation failure follow as the heap is exhausted. Since Rust does not give raw pointers the notion of a lifetime, i.e. the borrow checker cannot track them, dangling references (freeing a piece of memory but retaining a pointer to it) arise when a pointer is used beyond the lifetime of its referent. As shown in fig. 12, creating a raw pointer by calling the as_mut_ptr() method is legal; a separate safety guarantee is required only when the raw pointer is dereferenced.
The present embodiment encapsulates raw pointer dereferencing with an internal unsafe function by adjusting the architecture of the unsafe code, as shown in step_2 of fig. 6. Specifically, the dereferencing code block wrapped with the unsafe keyword is converted into a safely callable library function, and an internal unsafe function performs the dereference inside the library function.
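A sketch of this step, assuming a hypothetical library function `read_at` (not from the patent's figures): the dereference that callers previously performed inside their own `unsafe` blocks moves into one internal unsafe block behind a safe signature.

```rust
// Safely callable wrapper around a raw-pointer dereference.
fn read_at(data: &mut [u32], idx: usize) -> Option<u32> {
    if idx >= data.len() {
        return None; // reject out-of-range indices before touching the pointer
    }
    let ptr = data.as_mut_ptr(); // creating the raw pointer is itself safe
    // SAFETY: `ptr` derives from a live slice and `idx` was bounds-checked.
    Some(unsafe { *ptr.add(idx) })
}
```

The unsafe scope shrinks to one expression whose precondition is established two lines above it, which is what makes the enclosing function safe to expose.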
Step S220, encapsulation of unsafe functions and traits: encapsulation optimization is performed on unsafe functions and traits; the unsafe range within an unsafe function body is adjusted according to the internal unsafe principle and replaced with an internal unsafe block.
Calling an unsafe function or method is the primary form of unsafe code. An unsafe function or method must be called inside an unsafe block, because the function itself is marked unsafe, meaning that calling it may carry risk. Likewise, when a trait contains one or more methods whose safety the compiler cannot verify, the trait must be marked unsafe. To implement the methods of an unsafe trait, the unsafe flag is first added before the impl keyword. Such code cannot be verified safe by the compiler; its safety must be guaranteed by the developers themselves. Because the unsafety of an unsafe trait comes from its methods being unsafe fn, guaranteeing an unsafe trait reduces to guaranteeing its unsafe fn. When designing an operating system, the whole unsafe function body is often treated as one large unsafe block, which enlarges the range of unsafe code, makes the code more dangerous, and makes unsafe operations harder to find. In this embodiment, the encapsulation optimization of unsafe functions and traits adjusts the unsafe range within the unsafe function body according to the internal unsafe principle and replaces it with an internal unsafe block (as shown in step_2 of fig. 7).
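The narrowing described above can be sketched with a hypothetical function: instead of declaring the whole body `unsafe fn`, the function is safe and only the statement that needs an unsafe context is wrapped.

```rust
use std::ptr;

// Before: `unsafe fn swap_ends(...)` would make the entire body an unsafe
// context. After: a safe function whose unsafe range is one statement.
fn swap_ends(buf: &mut [u8]) {
    if buf.len() < 2 {
        return; // nothing to swap; this check also guards the unsafe block
    }
    let last = buf.len() - 1;
    let p = buf.as_mut_ptr();
    // SAFETY: 0 and `last` are distinct, in-bounds indices of a live buffer.
    unsafe { ptr::swap(p, p.add(last)) };
}
```

All the bookkeeping (length check, index arithmetic) now sits outside the unsafe block, so a reviewer searching for unsafe operations has a single line to audit.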
Further, referring to fig. 8, fig. 8 is a detailed flow chart of an embodiment of step S300 shown in fig. 1, and in this embodiment, step S300 includes:
step S310, dereferencing raw pointers: the lifetime of the raw pointer is explicitly annotated using 'static; assertions are used to constrain the state the program has reached: an assertion is inserted before the raw pointer is dereferenced, the validity of the memory is judged when execution reaches that point, and if the asserted condition is identified as correct, the code running before the assertion is judged acceptable.
For raw pointer dereferencing, after internal unsafe encapsulation the scope of unsafe effects is reduced, safe interface calls are provided to developers, and the development process is simplified. To further enhance the safety of this class of unsafe operations, this embodiment adds an ACSL-style contract design to the encapsulated function. First, the lifetime of the raw pointer is explicitly annotated using 'static. Even though the lifetime declared by the annotation may exceed the actual lifetime of the raw pointer, the compiler can then perform borrow checking on it. When the declared lifetime ends, the system automatically reclaims and cleans up the memory in combination with the RAII (Resource Acquisition Is Initialization) idiom, thereby reducing undefined behavior. Second, assertions are used to constrain the state the program has reached. The logic of assertions in ACSL is to verify the correctness of the code (as shown in fig. 9): when the asserted condition holds, the code that ran before the assertion can be judged acceptable. Following this principle, the invention inserts an assertion before the raw pointer is dereferenced and judges the validity of the memory when execution reaches that point. For generality, this embodiment only determines whether the raw pointer's memory address is NULL, providing a minimum safety guarantee. In concrete use, additional checks on the memory address can be added in combination with the context to better improve system safety.
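The minimum NULL-check guarantee can be sketched as follows; `deref_checked` is an illustrative name, and as the text notes, validity and alignment of the pointee remain the caller's obligation:

```rust
// ACSL-style runtime precondition before a raw-pointer dereference.
fn deref_checked(ptr: *const i32) -> i32 {
    // Precondition: the pointer must not be NULL (minimum safety guarantee).
    assert!(!ptr.is_null(), "precondition violated: pointer is NULL");
    // SAFETY: non-null asserted above; the caller must still supply a live,
    // aligned pointer to an i32, which cannot be checked at runtime.
    unsafe { *ptr }
}
```

If the assertion passes, execution up to the dereference is judged acceptable; if it fails, the program stops at the contract boundary instead of reading invalid memory.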
Step S320, unsafe functions and traits: the encapsulated unsafe function is annotated with appropriate preconditions and postconditions; before the function is called, an assertion checks whether the state provided by the current calling environment meets the requirements; when the precondition is satisfied, an assertion is likewise used to check the return state annotated by the postcondition.
For unsafe functions and traits after encapsulation, the unsafe range is reduced. For some internal unsafe functions, however, safety depends on the input or the execution environment. This embodiment uses the design of ACSL function contracts to further specify the input, output, and execution environment of unsafe code. The Hoare-logic form is: {precondition} function body {postcondition}; the specific verification logic is shown in fig. 8. This embodiment annotates the encapsulated unsafe function with appropriate preconditions and postconditions. Before the function is called, an assertion checks whether the state provided by the current calling environment meets the requirements. When the precondition is satisfied, an assertion likewise checks the return state annotated by the postcondition. If the function passes the checks of both the precondition and the postcondition, the correctness and safety of the calling process can be guaranteed.
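A Hoare triple {P} body {Q} rendered as runtime assertions, in the spirit of an ACSL function contract; `checked_div` and its conditions are illustrative, not taken from the patent's figures:

```rust
// {den != 0}  q = num / den  {q*den <= num && num - q*den < den}
fn checked_div(num: u32, den: u32) -> u32 {
    // Precondition: the calling environment must supply a non-zero divisor.
    assert!(den != 0, "precondition violated: division by zero");
    let q = num / den;
    // Postcondition: the result is consistent with integer division.
    assert!(q * den <= num && num - q * den < den, "postcondition violated");
    q
}
```

The precondition guards the environment the body needs; the postcondition documents, and checks, the state the body promises to return.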
Compared with the prior art, the Rust-based security enhancement model development method of the present invention combines substitution, internal unsafe encapsulation, and ACSL formal verification mechanisms in one security enhancement model, so as to minimize unsafe code and reduce interaction between unsafe and safe code, thereby lowering the complexity of formal verification and improving the security of a Rust operating system. The specific advantages include:
1) The method optimizes a limited class of unsafe operations, reducing the complexity of unsafe code in an operating system, lowering the probability of false negatives and false positives in program testing, and greatly improving the safety and reliability of the system.
2) By combining substitution with internal unsafe encapsulation, the scope of unsafe code effects and the interaction between safe and unsafe code can be effectively reduced. In addition, narrowing the unsafe range simplifies formal verification, providing greater potential for designing safer operating systems.
3) Applying ACSL-style formal contracts to the encapsulated internal unsafe functions limits the set of unsafe states and reduces the possibility of unsafety.
As shown in fig. 16, fig. 16 is a functional block diagram of an embodiment of a Rust-language-based security enhancement model development system provided by the present invention. In this embodiment, the system includes an equivalent function substitution design module 10, an internal unsafe encapsulation design module 20, and an ACSL-like formal design module 30. The equivalent function substitution design module 10 is configured to, according to a first principle set in advance, statically analyze the causes of unsafety for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields, and use standard library functions and/or interfaces to replace the two unsafe operations. The internal unsafe encapsulation design module 20 is configured to, according to a second principle set in advance, correctly encapsulate unsafe operations that cannot be directly replaced in internal unsafe functions before the unsafe code can expose its unsafety. The ACSL-like formal design module 30 is configured to, according to a third principle set in advance, apply the ACSL formal contract concept to the internal unsafe functions obtained by encapsulation and add ACSL-like formal design verification to the internal unsafe encapsulation functions, thereby providing lifetime support and limiting the set of unsafe states.
The equivalent function substitution design module 10, according to a first principle set in advance, statically analyzes the causes of unsafety for the two unsafe operations of accessing or modifying mutable static variables and accessing union fields, and uses standard library functions and/or interfaces to replace the two unsafe operations, thereby directly reducing unsafe usage. The first principle is the functionally equivalent substitution principle, which substitutes functionally equivalent safe Rust functions for a limited class of unsafe operations.
The internal unsafe encapsulation design module 20, according to a second principle set in advance, handles unsafe operations that cannot be directly replaced. Rust programming guidelines and empirical studies indicate that internal unsafe is a good way to encapsulate unsafe code: before the unsafe code can expose its unsafety, the irreplaceable unsafe operations are correctly encapsulated in an internal unsafe function. This conclusion is also borne out in the design of the Rust standard library/core library, where a large number of library functions adopt the internal unsafe design model. Internal unsafe refers to a mode in which a function (or interface) implementation contains code blocks marked with the unsafe keyword inside, while the function itself can be regarded as safe. Taking std::sync::Mutex as an example, Mutex provides an encapsulated lock() method whose design adopts the internal unsafe architecture, as shown in fig. 11.
Irreplaceable unsafe operations are encapsulated in the internal unsafe form. Specifically, for the three operations of dereferencing raw pointers, calling unsafe functions, and implementing unsafe traits, the unsafe functionality is encapsulated in safe functions according to the internal unsafe principle, providing safe interfaces for developers and reducing the scope of unsafe effects and its interaction with safe code. Here the second principle is the internal unsafe encapsulation principle, which encapsulates unavoidable and irreplaceable unsafe operations in the internal unsafe form.
For the internal unsafe functions obtained by encapsulation under the second principle, interaction between unsafe and safe code is reduced and a better isolation environment is provided. However, the unsafety inside an internal unsafe function is not yet further guaranteed. Therefore, the ACSL-like formal design module 30, according to a third principle set in advance, applies the ACSL formal contract concept to the internal unsafe functions obtained by encapsulation, adds ACSL-like formal design verification to the internal unsafe encapsulation functions, and provides lifetime support while limiting the set of unsafe states. The third principle is the principle of adding ACSL-like formal design verification, which uses the ACSL formal contract concept to limit the set of unsafe states in the encapsulated internal unsafe functions.
Further, please refer to fig. 17. Fig. 17 is a functional block diagram of an embodiment of the equivalent function substitution design module shown in fig. 16. In this embodiment, the equivalent function substitution design module 10 includes a mutable static variable substitution unit 11 and a union type substitution unit 12. The mutable static variable substitution unit 11 is used to replace mutable static variables with the smart pointer Mutex in Rust; meanwhile, the compiler performs borrow checking on the smart pointer Mutex: if an operation violating the borrowing rules is identified while the system is compiling, compilation still passes; if such an operation is identified while the system is running, the program panics and exits. The union type substitution unit 12 is used to optimize operations on unions with an automated method based on feature detection and replacement: by detecting uses of unions, the union type is replaced with the struct keyword.
The unsafe factors in accessing or modifying mutable static variables derive mainly from the fact that static variables are placed in the data segment at initialization, and the data in this region occupies that memory for the entire run of the program. When multiple threads access the same mutable static variable, concurrency hazards such as read-after-write, write-after-read, and write-write dependences may occur, causing data races and potentially deadlocking the system.
The mutable static variable substitution unit 11 uses the smart pointer Mutex<T> in Rust instead of mutable static variables (as shown in fig. 4). Mutex follows the design principle of interior mutability, i.e. the inner value can be modified without the wrapper itself being mutable, which is functionally equivalent to accessing or modifying a mutable static variable. Meanwhile, the compiler performs borrow checking on Mutex; an operation that violates the borrowing rules passes compilation but panics and exits at runtime, preventing data races and guaranteeing memory safety. Overall, the goal of completely removing this class of unsafe operations is achieved at the cost of some added contention overhead in the system.
A union is similar in definition to a struct, but the shared storage between fields (i.e., the same location stores different data types) is the main source of unsafe factors when accessing union fields. Accessing a union field may therefore read memory as a field that was never written, causing a fault.
The union type substitution unit 12 optimizes operations on unions using an automated method based on feature detection and replacement. Specifically, by detecting uses of a union, the union type is replaced with the struct keyword. Since a union initializes only one of its fields, each field needs to be assigned a value in the struct. The Default trait needs to be implemented for the replacement struct type, with the previously uninitialized fields set by default assignments. In general, using struct instead of union directly reduces unsafe usage at the cost of a small amount of extra memory, reducing the system's unsafe factors.
Preferably, referring to fig. 18, fig. 18 is a functional block diagram of an embodiment of the internal unsafe encapsulation design module shown in fig. 16. In this embodiment, the internal unsafe encapsulation design module 20 includes a raw pointer dereference encapsulation unit 21 and an unsafe function and trait encapsulation unit 22. The raw pointer dereference encapsulation unit 21 is configured to encapsulate the raw-pointer-dereferencing functionality with an internal unsafe function by adjusting the architecture of the unsafe code, converting the dereferencing code block wrapped with the unsafe keyword into a safely callable library function that performs the dereference inside an internal unsafe function. The unsafe function and trait encapsulation unit 22 is configured to perform encapsulation optimization on unsafe functions and traits, adjusting the unsafe range within an unsafe function body according to the internal unsafe principle and replacing it with an internal unsafe block.
For raw pointer dereferencing, the main source of unsafe factors is that the validity of the pointer address is not guaranteed: a raw pointer does not always point to a valid address, and multiple raw pointers may point to the same address space, or even to NULL. When a raw pointer dereference in Rust accesses an indeterminate address, a memory access error may result. Furthermore, raw pointers do not participate in automatic resource reclamation, so undefined behaviors (double-free, use-after-free, etc.) may occur. For example, if a pointer is used for dynamic allocation but the allocation is not released after use, the allocated block can no longer be used by the program, and memory leaks and allocation failure follow as the heap is exhausted. Since Rust does not give raw pointers the notion of a lifetime, i.e. the borrow checker cannot track them, dangling references (freeing a piece of memory but retaining a pointer to it) arise when a pointer is used beyond the lifetime of its referent. As shown in fig. 12, creating a raw pointer by calling the as_mut_ptr() method is legal; a separate safety guarantee is required only when the raw pointer is dereferenced.
The raw pointer dereference encapsulation unit 21 encapsulates raw pointer dereferencing with an internal unsafe function by adjusting the architecture of the unsafe code, as shown in step_2 of fig. 6. Specifically, the dereferencing code block wrapped with the unsafe keyword is converted into a safely callable library function, and an internal unsafe function performs the dereference inside the library function.
For the unsafe function and trait encapsulation unit 22, calling an unsafe function or method is the primary form of unsafe code. An unsafe function or method must be called inside an unsafe block, because the function itself is marked unsafe, meaning that calling it may carry risk. Likewise, when a trait contains one or more methods whose safety the compiler cannot verify, the trait must be marked unsafe. To implement the methods of an unsafe trait, the unsafe flag is first added before the impl keyword. Such code cannot be verified safe by the compiler; its safety must be guaranteed by the developers themselves. Because the unsafety of an unsafe trait comes from its methods being unsafe fn, guaranteeing an unsafe trait reduces to guaranteeing its unsafe fn. When designing an operating system, the whole unsafe function body is often treated as one large unsafe block, which enlarges the range of unsafe code, makes the code more dangerous, and makes unsafe operations harder to find. In this embodiment, the encapsulation optimization of unsafe functions and traits adjusts the unsafe range within the unsafe function body according to the internal unsafe principle and replaces it with an internal unsafe block (as shown in step_2 of fig. 7).
Further, please refer to fig. 19. Fig. 19 is a functional block diagram of an embodiment of the ACSL-like formal design module shown in fig. 16. In this embodiment, the ACSL-like formal design module 30 includes a raw pointer dereference unit 31 and an unsafe function and trait unit 32. The raw pointer dereference unit 31 is used to explicitly annotate the lifetime of the raw pointer using 'static, and to use assertions to constrain the state the program has reached: an assertion is inserted before the raw pointer is dereferenced, the validity of the memory is judged when execution reaches that point, and if the asserted condition is identified as correct, the code running before the assertion is judged acceptable. The unsafe function and trait unit 32 is used to annotate the encapsulated unsafe function with appropriate preconditions and postconditions; before the function is called, an assertion checks whether the state provided by the current calling environment meets the requirements; when the precondition is satisfied, an assertion is likewise used to check the return state annotated by the postcondition.
For raw pointer dereferencing, after internal unsafe encapsulation the scope of unsafe effects is reduced, safe interface calls are provided to developers, and the development process is simplified. To further enhance the safety of this class of unsafe operations, the raw pointer dereference unit 31 adds an ACSL-style contract design to the encapsulated function. First, the lifetime of the raw pointer is explicitly annotated using 'static. Even though the lifetime declared by the annotation may exceed the actual lifetime of the raw pointer, the compiler can then perform borrow checking on it. When the declared lifetime ends, the system automatically reclaims and cleans up the memory in combination with the RAII (Resource Acquisition Is Initialization) idiom, thereby reducing undefined behavior. Second, assertions are used to constrain the state the program has reached. The logic of assertions in ACSL is to verify the correctness of the code (as shown in fig. 9): when the asserted condition holds, the code that ran before the assertion can be judged acceptable. Following this principle, the invention inserts an assertion before the raw pointer is dereferenced and judges the validity of the memory when execution reaches that point. For generality, this embodiment only determines whether the raw pointer's memory address is NULL, providing a minimum safety guarantee. In concrete use, additional checks on the memory address can be added in combination with the context to better improve system safety.
For unsafe functions and traits after encapsulation, the unsafe range is reduced. For some internal unsafe functions, however, safety depends on the input or the execution environment. The unsafe function and trait unit 32 uses the design of ACSL function contracts to further specify the input, output, and execution environment of unsafe code. The Hoare-logic form is: {precondition} function body {postcondition}; the specific verification logic is shown in fig. 8. This embodiment annotates the encapsulated unsafe function with appropriate preconditions and postconditions. Before the function is called, an assertion checks whether the state provided by the current calling environment meets the requirements. When the precondition is satisfied, an assertion likewise checks the return state annotated by the postcondition. If the function passes the checks of both the precondition and the postcondition, the correctness and safety of the calling process can be guaranteed.
The following illustrates the Rust-based security enhancement model development method and system of this embodiment with a concrete example:
a small operating system, BlogOS, designed in the Rust language is selected to construct this embodiment. BlogOS is built on the x86-64 architecture and includes interrupt and exception handling, memory management, and other functions. The security enhancement model designed by the invention is used to optimize the unsafe operations in BlogOS, so as to reduce unsafe operations in the system and improve system safety.
1) Equivalent functional substitution
For accessing mutable static variables, a defect that accesses and modifies a mutable static variable is found in src\gdt.rs, as shown in FIG. 13.
The invention optimizes this defect by directly replacing it with a Mutex lock to eliminate the unsafe operation; the optimized result is shown in fig. 14.
Since union fields are not used in BlogOS, their optimization process is not described here.
2) Internal unsafe encapsulation and ACSL contract
Taking memory management in memory.rs as an example, the active_level_4_table() function declared unsafe is selected for optimization, as shown in fig. 12. The function mainly returns a reference to the active level-4 page table, and internally calls the as_mut_ptr() method to create a raw pointer, which is safe and legal. A separate safety guarantee is required only when the raw pointer is dereferenced.
By analyzing this function, its unsafety comes mainly from two aspects: (i) the correctness of "physical_memory_offset", which should map physical memory to the correct virtual memory; (ii) the memory safety of dereferencing the raw pointer. With the mechanism of the present invention, the unsafe operation of dereferencing the raw pointer is first encapsulated using an internal unsafe function, reducing the unsafe range. Then, through static analysis, a precondition is added to ensure the correctness of "physical_memory_offset". The repaired code is shown in fig. 15.
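The shape of this repair can be sketched under strong assumptions: the real BlogOS code uses x86_64 page-table types, while this stand-in models the mapped virtual region as a byte slice and the function name `read_phys` is invented for illustration. The two contract elements are (i) a precondition assertion on physical_memory_offset and (ii) an internal unsafe block around the single dereference.

```rust
// Hedged sketch of the active_level_4_table() repair pattern.
fn read_phys(virt: &[u8], physical_memory_offset: usize, phys_addr: usize) -> u8 {
    // (i) Precondition: the offset must map the physical address into the
    // virtual region this program actually owns.
    let vaddr = physical_memory_offset + phys_addr;
    assert!(vaddr < virt.len(), "precondition violated: bad physical_memory_offset");
    // (ii) SAFETY: `vaddr` was bounds-checked against the live region above.
    unsafe { *virt.as_ptr().add(vaddr) }
}
```
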
Since the unsafety of an unsafe trait comes from its unsafe functions, the optimization procedure for unsafe traits is similar to that for unsafe functions. First, the specific cause of the unsafety is analyzed; then, the internal unsafe and ACSL-style framework proposed by the present invention is used to encapsulate and verify it. For unsafe features that cannot be encapsulated, preconditions and postconditions are added for each method, so that the set of unsafe states is reduced and subsequent developers can conveniently check them.
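A hedged sketch of the trait case: `RawLen` below is a hypothetical unsafe trait (none of these names come from BlogOS) whose contract the compiler cannot check, so the contract is written as ACSL-style comments and enforced with assertions at method entry and exit.

```rust
// An unsafe trait: implementors must promise (outside the type system)
// that `len` reports the true element count.
unsafe trait RawLen {
    fn len(&self) -> usize;

    /// requires: self.len() > 0        (precondition)
    /// ensures:  result < self.len()   (postcondition)
    fn last_index(&self) -> usize {
        // Detect a violated precondition at the call boundary...
        assert!(self.len() > 0, "precondition violated: empty container");
        let result = self.len() - 1;
        // ...and check the postcondition before returning.
        debug_assert!(result < self.len(), "postcondition violated");
        result
    }
}

struct Buffer(Vec<u8>);

// SAFETY: Buffer::len reports the true length of the underlying vector,
// which is what the (hypothetical) trait contract requires.
unsafe impl RawLen for Buffer {
    fn len(&self) -> usize {
        self.0.len()
    }
}

fn main() {
    let buf = Buffer(vec![1, 2, 3]);
    println!("{}", buf.last_index()); // prints 2
}
```
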
All unsafe operations in the embodiment were checked and optimized according to the above method. Comparing each type of unsafe operation in the optimized system against the original BlogOS yields the results shown in FIG. 20.
As can be seen from FIG. 20, after BlogOS is repaired with the security enhancement model provided by the present invention, the overall number of unsafe operations is reduced by about 40%. Specific analysis shows that, by combining encapsulation and substitution, the invention can completely eliminate the unsafe operations of dereferencing raw pointers and accessing or modifying mutable static variables in the BlogOS system, effectively reducing the direct use of unsafe; calls to unsafe methods or functions are significantly reduced, and the use of function contracts and assertions both reduces the use of unsafe methods and narrows the unsafe scope. However, some operations in the system are implemented by calling functions provided by the standard library or core library; these cannot be reduced when the system is optimized with the present invention, since such functions are themselves defined as unsafe methods or traits. Therefore, the security enhancement model and method can further be applied to the design and encapsulation of the standard library or core library, optimizing the design of the called library functions and interfaces and thereby enabling the design of a safer operating system.
In general, the present invention enables the optimization of the defects of a limited class of Rust unsafe operations by combining substitution and encapsulation. Specifically, first, the combined approach of replacement and encapsulation effectively reduces the extent of unsafe segments and the interaction between unsafe code and safe code; as the unsafe range shrinks, the complexity of formal verification is reduced, making the design of a safer operating system more feasible. Second, ACSL-style formal reduction methods (e.g., assertions and function contracts) are used in internal unsafe functions to reduce the set of unsafe states and thus the likelihood of unsafe behavior, and lifetime support for raw pointers can be provided through explicit lifetime annotations. Through this scheme, safe code and unsafe code are isolated, the unsafety of unsafe functions is constrained, and the unsafe state space is reduced. Furthermore, the present invention is flexible and scalable, allowing developers to further expand the scope of security checks according to context and experience. With the development of the Rust programming language, more and more research focuses on Rust security; owing to the scalability of the present invention, such results can easily be integrated into the encapsulated library functions without excessive adjustments to the source code by developers. In addition, by optimizing a limited class of unsafe operations, the invention directly reduces the probability of false negatives and false positives in program testing, greatly reduces the content that must be guaranteed and standardized during development, lightens the development burden on developers, and effectively improves development efficiency and system security.
The security model designed by the invention reduces the scope of unsafe behavior as far as possible, achieves the goal of minimizing unsafety, improves system security, effectively reduces the state complexity of formal verification, and makes the application of formal verification to large-scale system software feasible.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The method for developing the security enhancement model based on the Rust language is characterized by comprising the following steps of:
equivalent functional replacement design: according to a first principle set in advance, for the two unsafe operations of accessing or modifying a mutable static variable and a union field, statically analyzing the causes of the unsafety, and replacing the two unsafe operations with standard library functions and/or interfaces;
internal unsafe encapsulation design: according to a second principle set in advance, for unsafe operations that cannot be directly replaced, correctly encapsulating them in an internal unsafe function before the unsafe code exposes its unsafety;
adding ACSL-like formalized design: according to a third principle set in advance, applying the ACSL formal contract concept to the internal unsafe function obtained by the encapsulation, and adding ACSL-like formal design verification for the internal unsafe encapsulation function, so as to provide lifetime support and limit the set of unsafe states.
2. The Rust language-based security enhancement model development method of claim 1, wherein the step of equivalent functional replacement design includes:
substitution of the mutable static variable type: using the smart pointer Mutex in Rust to replace the use of mutable static variables; meanwhile, the compiler performs borrow checking on the smart pointer Mutex: if an operation violating the borrow rules is detected while the system is being compiled, compilation still passes; if an operation violating the borrow rules is detected while the system is running, the program panics and exits;
substitution of the union type: optimizing operations on unions using a method based on automated feature detection and replacement; by detecting the use of unions, the union type is replaced with the struct keyword.
3. The Rust language-based security enhancement model development method of claim 1, wherein the step of internal unsafe encapsulation design includes:
encapsulation of dereferencing raw pointers: by adjusting the architecture of the unsafe code, encapsulating the raw-pointer dereferencing function with an internal unsafe function, converting the raw-pointer dereferencing function block wrapped by the unsafe keyword into a library function that is safe to call, and performing the raw-pointer dereferencing operation with the internal unsafe function inside the library function;
encapsulation of unsafe functions and traits: performing encapsulation optimization design on unsafe functions and traits, adjusting the unsafe range in the unsafe function block according to the internal unsafe principle, and replacing it with an internal unsafe block.
4. The Rust language-based security enhancement model development method of claim 1, wherein the step of adding an ACSL-like formalized design includes:
dereferencing raw pointers: explicitly annotating the lifetime of the raw pointer using 'static; using an assertion to constrain the state of the program at that point, inserting the assertion before dereferencing the raw pointer, and judging the validity of the memory when execution reaches that point; if the assertion is identified as holding, the code executed before the assertion is judged to be qualified;
unsafe functions and traits: annotating the encapsulated unsafe function by adding appropriate preconditions and postconditions; before the function is called, using assertions to detect whether the state provided by the current calling environment meets the requirements; when the preconditions are satisfied, assertions are also used to check the function's return state against the postcondition annotations.
5. The method for developing a security enhancement model based on the Rust language according to claim 1, wherein the first principle is a functionally equivalent substitution principle for substituting a functionally equivalent safe Rust function for a limited class of unsafe operations; the second principle is an internal unsafe encapsulation principle for encapsulating unavoidable and irreplaceable unsafe operations with internal unsafe; and the third principle is an added ACSL-like formal design verification principle for limiting the set of unsafe states in the encapsulated internal unsafe function by using the ACSL formal contract concept.
6. A Rust language-based security enhancement model development system, comprising:
the equivalent function substitution design module (10) is used for, according to a first principle set in advance, statically analyzing the causes of the unsafety for the two unsafe operations of accessing or modifying a mutable static variable and a union field, and replacing the two unsafe operations with standard library functions and/or interfaces;
an internal unsafe encapsulation design module (20) is used for, according to a second principle set in advance, correctly encapsulating unsafe operations that cannot be directly replaced in an internal unsafe function before the unsafe code exposes its unsafety;
and the ACSL-like formalized design module (30) is used for, according to a third principle set in advance, applying the ACSL formal contract concept to the internal unsafe function obtained by the encapsulation, adding ACSL-like formal design verification for the internal unsafe encapsulation function, providing lifetime support, and limiting the set of unsafe states.
7. The Rust language based security enhancement model development system of claim 6, wherein the equivalent functional replacement design module (10) comprises:
a substitution unit (11) of the mutable static variable type, for using the smart pointer Mutex in Rust to replace the use of mutable static variables; meanwhile, the compiler performs borrow checking on the smart pointer Mutex: if an operation violating the borrow rules is detected while the system is being compiled, compilation still passes; if an operation violating the borrow rules is detected while the system is running, the program panics and exits;
a substitution unit (12) of the union type, for optimizing operations on unions using a method based on automated feature detection and replacement; by detecting the use of unions, the union type is replaced with the struct keyword.
8. The Rust language-based security enhancement model development system of claim 6, wherein the internal unsafe encapsulation design module (20) includes:
an encapsulation unit (21) for dereferencing raw pointers, which is used for encapsulating the raw-pointer dereferencing function with an internal unsafe function by adjusting the architecture of the unsafe code, converting the raw-pointer dereferencing function block wrapped by the unsafe keyword into a library function that is safe to call, and performing the raw-pointer dereferencing operation with the internal unsafe function inside the library function;
and an encapsulation unit (22) for unsafe functions and traits, which is used for performing encapsulation optimization design on unsafe functions and traits, adjusting the unsafe range in the unsafe function block according to the internal unsafe principle, and replacing it with an internal unsafe block.
9. The Rust language-based security enhancement model development system of claim 6, wherein the ACSL-like formalized design module (30) includes:
a dereferencing raw pointer unit (31) for explicitly annotating the lifetime of the raw pointer using 'static; using an assertion to constrain the state of the program at that point, inserting the assertion before dereferencing the raw pointer, and judging the validity of the memory when execution reaches that point; if the assertion is identified as holding, the code executed before the assertion is judged to be qualified;
an unsafe function and trait unit (32) for annotating the encapsulated unsafe function with appropriate preconditions and postconditions; before the function is called, using assertions to detect whether the state provided by the current calling environment meets the requirements; when the preconditions are satisfied, assertions are also used to check the function's return state against the postcondition annotations.
10. The Rust language-based security enhancement model development system of claim 6, wherein the first principle is a functionally equivalent substitution principle for substituting a functionally equivalent safe Rust function for a limited class of unsafe operations; the second principle is an internal unsafe encapsulation principle for encapsulating unavoidable and irreplaceable unsafe operations with internal unsafe; and the third principle is an added ACSL-like formal design verification principle for limiting the set of unsafe states in the encapsulated internal unsafe function by using the ACSL formal contract concept.
CN202310750401.8A 2023-06-25 2023-06-25 Rust language-based safety enhancement model development method and system Active CN116484439B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310750401.8A CN116484439B (en) 2023-06-25 2023-06-25 Rust language-based safety enhancement model development method and system


Publications (2)

Publication Number Publication Date
CN116484439A CN116484439A (en) 2023-07-25
CN116484439B true CN116484439B (en) 2023-09-01

Family

ID=87218176

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310750401.8A Active CN116484439B (en) 2023-06-25 2023-06-25 Rust language-based safety enhancement model development method and system

Country Status (1)

Country Link
CN (1) CN116484439B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116680705B (en) * 2023-07-31 2023-12-12 中国人民解放军国防科技大学 Rust program defect automatic detection method and system based on feature extraction

Citations (4)

Publication number Priority date Publication date Assignee Title
CN113672273A (en) * 2021-10-21 2021-11-19 阿里云计算有限公司 Data processing method, system and equipment
CN116089302A (en) * 2023-02-21 2023-05-09 东北大学 Method for detecting UNSAFE code fragment defects in Rust programming language
WO2023101574A1 (en) * 2021-12-03 2023-06-08 Limited Liability Company Solar Security Method and system for static analysis of binary executable code
CN116305163A (en) * 2023-04-03 2023-06-23 浙江大学 Rust language-oriented vulnerability automatic positioning and analyzing method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US9569612B2 (en) * 2013-03-14 2017-02-14 Daniel Shawcross Wilkerson Hard object: lightweight hardware enforcement of encapsulation, unforgeability, and transactionality
US20220214881A1 (en) * 2022-03-16 2022-07-07 Intel Corporation Ratchet pointers to enforce byte-granular bounds checks on multiple views of an object


Non-Patent Citations (1)

Title
RSMC: A Safety Model Checker for Concurrency and Memory Safety of Rust; YAN Fei; WANG Qizhong; ZHANG Liqiang; CHEN Yasha; Wuhan University Journal of Natural Sciences (Issue 02); full text *


Similar Documents

Publication Publication Date Title
Xie et al. Saturn: A scalable framework for error detection using boolean satisfiability
Abadi et al. Types for safe locking: Static race detection for Java
Ŝevčik et al. Relaxed-memory concurrency and verified compilation
Aldrich et al. Static analyses for eliminating unnecessary synchronization from Java programs
US6959432B2 (en) Process and system for developing mathematically validated object-oriented software
US7810080B2 (en) Automated safe secure techniques for eliminating undefined behavior in computer software
CN116484439B (en) Rust language-based safety enhancement model development method and system
Monteiro et al. Model checking C++ programs
Burnim et al. NDSeq: Runtime checking for nondeterministic sequential specifications of parallel correctness
Even-Mendoza et al. CsmithEdge: more effective compiler testing by handling undefined behaviour less conservatively
Milewicz et al. Runtime checking c programs
Lee et al. Interactive program debugging and optimization for directive-based, efficient gpu computing
Štill et al. Using off-the-shelf exception support components in C++ verification
Mezzetti et al. Type unsoundness in practice: An empirical study of Dart
Schirmer Analysing the Java package/access concepts in Isabelle/HOL
Benjamin et al. Runtime Annotation Checking with Frama-C: The E-ACSL Plug-in
Armando et al. Model checking linear programs with arrays
Stärk et al. Java bytecode verification is not possible
Yin et al. SafeOSL: Ensuring memory safety of C via ownership‐based intermediate language
Stärk et al. The problem of bytecode verification in current implementations of the JVM
Marriott Checking Memory Safety of Level 1 Safety-Critical Java Programs using Static-Analysis without Annotations
Scherer Engineering of Reliable and Secure Software via Customizable Integrated Compilation Systems
Tan JNI Light: An operational model for the core JNI
de Castro Correia Towards a Memory Safe C with Rust-Inspired Checks
Luckcuck Safety-Critical Java Level 2: Applications, Modelling, and Verification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant