CN117633914B - Chip-based password resource scheduling method, device and storage medium - Google Patents

Chip-based password resource scheduling method, device and storage medium

Info

Publication number
CN117633914B
Authority
CN
China
Prior art keywords
instruction
thread
algorithm
core
instruction sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410105814.5A
Other languages
Chinese (zh)
Other versions
CN117633914A (en)
Inventor
程生根
陈强
刘峰
罗鹏
马博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Open Security Research Inc
Original Assignee
Open Security Research Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Open Security Research Inc
Priority to CN202410105814.5A
Publication of CN117633914A
Application granted
Publication of CN117633914B

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • G06F21/72 Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information in cryptographic circuits
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Storage Device Security (AREA)

Abstract

The application discloses a chip-based password resource scheduling method, which comprises: receiving a first password resource service request, wherein the first password resource service request at least comprises a first instruction sequence; determining a first instruction dispatcher corresponding to the first instruction sequence, wherein the first instruction dispatcher is associated with a first instruction cache queue, a first thread pool and a first password resource pool; adding the first instruction sequence to the first instruction cache queue, and determining a first thread from the first thread pool; determining a first algorithm IP core corresponding to the first instruction sequence from the first password resource pool; and reading the first instruction sequence, and calling the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence. By using the multithreading of the operating system to execute multiple algorithm IP cores in parallel, the application fully exploits the performance of the algorithm IP cores and of the password resource service in the whole HSM algorithm system, and improves the competitiveness of security products.

Description

Chip-based password resource scheduling method, device and storage medium
Technical Field
The present application relates to the field of information security technologies, and in particular, but not limited to, a method, an apparatus, and a storage medium for chip-based cryptographic resource scheduling.
Background
An integrated algorithm IP (Intellectual Property) core offers high security, high data processing speed and no consumption of the main controller's computing resources when providing cryptographic services, which suits the practical needs of multifunctional security chip designs. In the related art, multiple algorithm IP cores are usually invoked from a bare-metal program. Because the algorithm IP cores cannot signal instruction completion through interrupts, a polling mechanism must wait for each core to finish its instruction, so the program can execute the function of only one algorithm IP core at a time, and while the program is polling the central processing unit (Central Processing Unit, CPU) cannot execute the functions of other algorithm IP cores, which prevents the algorithm IP cores from being used to their full capability.
Disclosure of Invention
The embodiments of the application provide a chip-based password resource scheduling method, device and storage medium, which use the multithreading of an operating system to execute the functions of multiple algorithm IP cores in parallel, fully exploit the performance of the algorithm IP cores and of the password resource service in the whole HSM algorithm system, and improve the competitiveness of security products.
The technical scheme of the embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a chip-based cryptographic resource scheduling method, applied to a security chip, where the method includes:
Receiving a first password resource service request, wherein the first password resource service request at least comprises a first instruction sequence;
Determining a first instruction dispatcher corresponding to the first instruction sequence, wherein the first instruction dispatcher is associated with a first instruction cache queue, a first thread pool and a first password resource pool;
Adding the first instruction sequence to the first instruction cache queue and determining a first thread from the first thread pool, the first thread pool including at least two threads;
determining a first algorithm IP core corresponding to the first instruction sequence from the first password resource pool, wherein the first password resource pool at least comprises two algorithm IP cores;
and reading the first instruction sequence from the first instruction cache queue, and calling the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence.
In a second aspect, an embodiment of the present application provides an electronic device, including a receiving unit, a first determining unit, a selecting unit, a second determining unit, and an executing unit;
The receiving unit is used for receiving a first password resource service request, and the first password resource service request at least comprises a first instruction sequence;
a first determining unit, configured to determine a first instruction scheduler corresponding to the first instruction sequence, where the first instruction scheduler is associated with a first instruction cache queue, a first thread pool, and a first password resource pool;
A selection unit configured to add the first instruction sequence to the first instruction cache queue, and determine a first thread from the first thread pool, where the first thread pool includes at least two threads;
a second determining unit, configured to determine a first algorithm IP core corresponding to the first instruction sequence from the first cryptographic resource pool, where the first cryptographic resource pool includes at least two algorithm IP cores;
And the execution unit is used for reading the first instruction sequence from the first instruction cache queue and calling the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence.
In a third aspect, an embodiment of the present application provides an electronic device, including a host CPU, a security chip, and a communication port;
The communication port is used for realizing communication connection between the host CPU and the security chip;
And when the host CPU runs the password resource scheduling program, the security chip is controlled to implement the steps of the chip-based password resource scheduling method.
In a fourth aspect, an embodiment of the present application provides a storage medium, i.e. a computer readable storage medium, on which a computer program is stored, which when executed by a host CPU, implements the steps of the above-described chip-based cryptographic resource scheduling method.
The chip-based password resource scheduling method, device and storage medium provided by the embodiments of the application schedule instructions across multiple threads inside the hardware security module (Hardware Security Module, HSM), which removes the limitation that a CPU can execute only one algorithm IP core at a time and allows multiple algorithm IP cores to be accessed in parallel. This achieves parallel execution of multiple algorithm IP cores, fully exploits the performance of the algorithm IP cores and of the password resource service in the whole HSM algorithm system, and improves the competitiveness of security products.
Drawings
FIG. 1 is a schematic diagram of an alternative flow chart of a chip-based cryptographic resource scheduling method according to an embodiment of the present application;
FIG. 2 is an alternative distribution architecture diagram of the instruction dispatch modules provided by embodiments of the present application;
FIG. 3 is a second schematic diagram of an alternative flow chart of a chip-based cryptographic resource scheduling method according to an embodiment of the present application;
FIG. 4 is a third schematic diagram of an alternative flow chart of a chip-based cryptographic resource scheduling method according to an embodiment of the present application;
FIG. 5 is a fourth schematic diagram of an alternative flow chart of a chip-based cryptographic resource scheduling method according to an embodiment of the present application;
FIG. 6 is an alternative software class diagram of a chip-based cryptographic resource scheduling method provided by an embodiment of the application;
FIG. 7 is an alternative program timing diagram of a chip-based cryptographic resource scheduling method provided by an embodiment of the application;
FIG. 8 is an alternative schematic structural view of an electronic device provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of an alternative structure of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
It should be appreciated that reference throughout this specification to "an embodiment of the present application" or "the foregoing embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrase "in an embodiment of the application" or "in the foregoing embodiments" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In various embodiments of the present application, the sequence number of each process does not mean the sequence of execution, and the execution sequence of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
Unless otherwise specified, any step in the embodiments of the present application may be performed by the electronic device, and in particular by the processor of the electronic device. It should also be noted that the embodiments of the present application do not restrict the order in which the electronic device performs the following steps. In addition, different embodiments may process data with the same method or with different methods. It should be further noted that any step in the embodiments of the present application may be executed by the electronic device independently; that is, the electronic device does not have to depend on the execution of other steps when executing any step of the embodiments described below.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The embodiment of the application provides a chip-based password resource scheduling method. In practical applications, the method may be implemented by an electronic device, and each functional entity in the electronic device (such as a terminal device) may be implemented cooperatively by its hardware resources, such as computing resources of a processor and communication resources (for example, those supporting communication over optical cables or cellular links).
Embodiments of a chip-based cryptographic resource scheduling method, apparatus, and storage medium according to embodiments of the present application are described below. The embodiment of the application provides a chip-based password resource scheduling method, which is applied to a security chip, as shown in fig. 1, and comprises the following steps:
S101, receiving a first password resource service request, wherein the first password resource service request at least comprises a first instruction sequence.
In the embodiment of the application, the first cryptographic resource service request is information sent to the server requesting that a specific operation be executed in a specific secure cryptographic environment. Here, the first cryptographic resource service request may be a request for the functional services of a plurality of cryptographic algorithm IP cores preset in the HSM, where a cryptographic algorithm IP core corresponds to an algorithm IP core in the embodiment of the present application, and the first cryptographic resource service request may include at least a first instruction sequence and the execution data required to execute it. Here, the plurality of cryptographic algorithm IP cores may be integrated into a single hardware platform, the HSM, which provides services such as encryption/decryption and signature/verification. In the embodiment of the application, the HSM is the computer hardware device that protects and manages core data, such as keys, for the cryptographic resource service and provides cryptographic services such as encryption, decryption and signature verification; it can be integrated into a chip as part of the chip circuitry.
In the embodiment of the application, the HSM can include corresponding firmware and drivers and can be applied to, for example, a financial payment system or an automotive electronic system. When applied to an automotive electronic system, it provides abstract encryption and decryption services to upper-layer applications and other basic software of the vehicle through inter-process communication or shared memory, meets the security requirements of the vehicle, and improves the ability of the in-vehicle network and of vehicle wireless communication (Vehicle to Everything, V2X) applications to defend against attacks. In the embodiment of the application, the HSM can be an independent security chip or a partition on a security chip, embedding a security function in the main processor in the field of automotive network security.
In the embodiment of the application, a cryptographic resource can be an algorithm IP core that provides a cryptographic algorithm service, where the algorithm IP core is a verified, reusable circuit module in the security chip with a specific cryptographic function, and is standardized and tradable. In the embodiment of the application, the cryptographic resources can include several types of algorithm IP cores.
In an embodiment of the present application, the cryptographic resources may include a symmetric key algorithm (Symmetric Key Engine, SKE) IP core, an asymmetric key algorithm (Public Key Engine, PKE) IP core, a true random number generator (True Random Number Generator, TRNG) IP core, a hash algorithm (Hash Function Engine, HASH) IP core, and the like.
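For illustration only, the four kinds of algorithm IP cores listed above could be tagged in firmware with a simple enumeration such as the following C++ sketch; the identifiers are assumptions of this description rather than names defined by the application.

enum class AlgoIpKind {   // hypothetical type tags for the algorithm IP cores above
    SKE,    // symmetric key engine
    PKE,    // public (asymmetric) key engine
    HASH,   // hash function engine
    TRNG    // true random number generator
};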
In the embodiment of the application, a second password resource service request may also be received, where the second password resource service request includes at least a second instruction sequence. The second instruction sequence and the first instruction sequence may correspond to the same kind of algorithm IP core or to different kinds of algorithm IP cores, and they request the execution of the functions of the same or of different kinds of algorithm IP cores.
In the embodiment of the present application, the HSM may receive the first cryptographic resource service request through a communication port, where the communication port is used for receiving and transmitting instruction sequences. In some possible implementations, the communication port may be a UART (Universal Asynchronous Receiver-Transmitter) serial communication port or a mailbox CPU inter-core communication port.
S102, determining a first instruction dispatcher corresponding to the first instruction sequence, wherein the first instruction dispatcher is associated with a first instruction cache queue, a first thread pool and a first password resource pool.
In the embodiment of the application, the first instruction dispatcher is used to achieve system parallelism by scheduling instruction sequences, so that the program runs efficiently when instructions execute in parallel.
In the embodiment of the present application, each type of algorithm IP core performs its functions through its own instruction scheduling module; that is, the embodiment of the present application may include several instruction scheduling modules to implement the functions of different kinds of cryptographic algorithms. Within a single instruction scheduling module, a thread pool, an instruction cache queue and a password resource pool are associated with the instruction scheduler. The first instruction scheduling module likewise contains the first thread pool, the first instruction cache queue and the first password resource pool associated with the first instruction scheduler, and the number of threads in the first thread pool is equal to the number of algorithm IP cores in the first password resource pool. In other words, each instruction scheduler in each instruction scheduling module is accompanied by one thread pool and one password resource pool, and the thread pool contains as many threads as the password resource pool contains algorithm IP cores. Illustratively, if the first password resource pool corresponding to the first instruction scheduler contains 2 algorithm IP cores, the first thread pool correspondingly contains 2 threads.
The following description is given in connection with fig. 2. The embodiment of the application can include four instruction scheduling modules, built around the SKE scheduler 211, the PKE scheduler 221, the HASH scheduler 231 and the TRNG scheduler 241. Specifically, associated with the SKE scheduler 211 are a thread pool 212, a SKE algorithm IP core pool 213, and an instruction cache queue 214 corresponding to the thread pool 212; associated with the PKE scheduler 221 are a thread pool 222, a PKE algorithm IP core pool 223, and an instruction cache queue 224 corresponding to the thread pool 222; associated with the HASH scheduler 231 are a thread pool 232, a HASH algorithm IP core pool 233, and an instruction cache queue 234 corresponding to the thread pool 232; and associated with the TRNG scheduler 241 are a thread pool 242, a TRNG algorithm IP core pool 243, and an instruction cache queue 244 corresponding to the thread pool 242. The SKE algorithm IP core pool 213 contains a algorithm IP cores, SKE core 1 to SKE core a, and the thread pool 212 correspondingly contains a threads; similarly, there are b algorithm IP cores in PKE algorithm IP core pool 223 and b threads in thread pool 222, c algorithm IP cores in HASH algorithm IP core pool 233 and c threads in thread pool 232, and d algorithm IP cores in TRNG algorithm IP core pool 243 and d threads in thread pool 242. In the embodiment of the application, the number of threads in the thread pool and the number of algorithm IP cores in the password resource pool of each instruction scheduling module are determined according to the parallel processing capacity of the algorithm IP cores in the HSM algorithm system, so as to fully exploit the performance of the HSM algorithm system. Based on this, the embodiment of the present application places no limitation on these numbers.
In some possible implementations, the first instruction scheduler may be any one of SKE scheduler 211, PKE scheduler 221, HASH scheduler 231, TRNG scheduler 241.
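The pairing described above (one scheduler with one instruction cache queue, one thread pool and one password resource pool of matching size) could be sketched as the following hypothetical C++ structure; the names and the use of standard containers are assumptions, not the application's actual firmware.

#include <cstddef>
#include <cstdint>
#include <queue>
#include <vector>

struct AlgoIpCore   { int id; bool busy; };   // one hardware algorithm IP core
struct WorkerThread { int id; bool idle; };   // one thread serving the scheduler

struct InstructionSchedulingModule {
    std::queue<std::vector<std::uint8_t>> instructionCacheQueue; // buffered instruction sequences
    std::vector<WorkerThread>             threadPool;            // as many threads ...
    std::vector<AlgoIpCore>               resourcePool;          // ... as algorithm IP cores

    explicit InstructionSchedulingModule(std::size_t coreCount) {
        for (std::size_t i = 0; i < coreCount; ++i) {
            threadPool.push_back({static_cast<int>(i), true});    // all threads start idle
            resourcePool.push_back({static_cast<int>(i), false}); // all cores start free
        }
    }
};

One such module would be instantiated per scheduler, for example with a, b, c and d cores for SKE, PKE, HASH and TRNG respectively.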
S103, adding the first instruction sequence to the first instruction cache queue, and determining a first thread from the first thread pool, wherein the first thread pool at least comprises two threads.
In an embodiment of the present application, the first instruction cache queue is configured to cache the received first instruction sequence. Here, when at least two algorithm IP cores executing in parallel correspond to the first instruction scheduler at the same time, the first instruction cache queue caches the at least two received instruction sequences for the first instruction scheduler. In the embodiment of the application, when the algorithm IP cores executing in parallel do not all correspond to the first instruction scheduler, each received instruction sequence is added to the instruction cache queue associated with the instruction scheduler corresponding to that instruction sequence.
In the embodiment of the application, the first thread pool is created when the program starts, with as many threads as there are algorithm IP cores in the first password resource pool, and is used to receive the instruction sequence in the first password resource service request and to call the corresponding instruction processing object to process it. When the first password resource service request is received, the already created threads are reused to process multiple instruction sequences, realizing message parsing and processing of the executed instructions. Specifically, when the HSM needs to execute the first password resource service request, an idle thread is fetched from the first thread pool to execute it, and after execution completes, the thread is put back into the first thread pool to wait for the next password resource service request. The implementation of thread reuse in the embodiment of the application is not limited to a thread pool; other implementations, such as thread local storage (Thread Local Storage, TLS), can be adopted as long as the running efficiency and performance of the program are guaranteed.
In the embodiment of the present application, the first thread pool includes at least two threads, and while the received first instruction sequence runs in the first thread, the remaining threads in the ready state can stay idle.
In the embodiment of the present application, determining the first thread corresponding to the first instruction sequence may mean determining at least one idle thread from the first thread pool, then selecting one idle thread, i.e., the first thread, from the at least one idle thread and waking it up as the working thread that performs the message parsing and processing of the first instruction sequence.
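A minimal sketch of this thread reuse, written with C++ standard library threads purely for illustration (the application's firmware may rely on an RTOS-specific thread pool instead), is given below: a fixed number of workers sleep until a task is queued, one idle worker is woken as the working thread, and it returns to waiting once the task completes.

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t threadCount) {          // created at program start,
        for (std::size_t i = 0; i < threadCount; ++i)        // one thread per algorithm IP core
            workers_.emplace_back([this] { workerLoop(); });
    }
    ~ThreadPool() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void submit(std::function<void()> task) {                // a password resource service request arrives
        { std::lock_guard<std::mutex> lk(m_); tasks_.push(std::move(task)); }
        cv_.notify_one();                                    // wake one idle thread as the working thread
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();                                          // parse and process the instruction sequence,
        }                                                    // then go back to waiting for the next request
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
};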
S104, determining a first algorithm IP core corresponding to the first instruction sequence from the first password resource pool, wherein the first password resource pool at least comprises two algorithm IP cores.
In the embodiment of the application, the first password resource pool is used for managing and maintaining the use state of the algorithm IP core and distributing the functions of the algorithm IP core. When the received first password resource service request arrives, a first algorithm IP core corresponding to the first instruction sequence needs to be determined from the first password resource pool before the first instruction sequence is executed.
In an embodiment of the present application, the first cryptographic resource pool may include one or more algorithmic IP cores. In some possible implementations, the algorithm IP core included in the first cryptographic resource pool may be any one of SKE, PKE, HASH, TRNG four algorithm IP cores.
In the embodiment of the application, the number of the algorithm IP cores in the first password resource pool is determined according to the equipment requirement in the actual application process, and therefore, in the embodiment of the application, the number of the algorithm IP cores in the first password resource pool is not limited.
S105, the first instruction sequence is read from the first instruction cache queue, and the first thread is called to execute the processing corresponding to the first algorithm IP core on the first instruction sequence.
In the embodiment of the application, the awakened first thread reads a first instruction sequence from the first instruction cache queue and invokes a driver of the corresponding first algorithm IP core to execute the corresponding function of the first algorithm IP core.
In the embodiment of the application, the awakened first thread reads the first instruction sequence from the first instruction cache queue, parses it, and then submits the first task corresponding to the first instruction sequence to the first algorithm IP core, so that the first algorithm IP core executes the first task until it succeeds.
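What the awakened thread does with one instruction sequence could be sketched as the following hypothetical helper; the types and callback signatures are assumptions used only to make the flow concrete.

#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

using InstructionSequence = std::vector<std::uint8_t>;
struct ParsedTask { int opcode = 0; InstructionSequence payload; };   // result of message parsing
struct AlgoIpCore { int id = 0; bool busy = false; };

void processOneInstruction(std::queue<InstructionSequence>& cacheQueue,
                           AlgoIpCore& core,
                           const std::function<ParsedTask(const InstructionSequence&)>& parse,
                           const std::function<void(AlgoIpCore&, const ParsedTask&)>& driverRun)
{
    InstructionSequence seq = std::move(cacheQueue.front());   // read from the instruction cache queue
    cacheQueue.pop();
    ParsedTask task = parse(seq);    // the instruction processing object parses the sequence
    core.busy = true;
    driverRun(core, task);           // the IP core driver runs the task until it succeeds
    core.busy = false;               // the core becomes idle and can be reassigned
}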
It should be noted that, in the embodiment of the present application, the host CPU may run the program of the HSM algorithm system on a plurality of CPU cores and may be oriented to high-performance computing fields such as enterprise-level applications and data centers. In the embodiment of the present application, the host CPU adopts a multi-CPU-core design, where the host CPU may be understood as the CPU in the embodiment of the present application.
In the embodiment of the present application, the execution result of the first algorithm IP core function may be obtained by an instruction processing object corresponding to the first instruction sequence, and output to the CPU through the communication port.
According to the chip-based password resource scheduling method provided by the embodiment of the application, based on the thread pool and the password resource pool in each instruction scheduling module in the HSM, the CPU determines an idle thread from the thread pool and the algorithm IP core corresponding to the instruction sequence from the password resource pool, and then executes the function of that algorithm IP core on the determined idle thread. Because the multiple threads in the thread pool perform the scheduling of instructions, the limitation that the CPU can execute only one algorithm IP core at a time is removed and several algorithm IP cores can be accessed in parallel, achieving parallel execution of multiple algorithm IP cores, fully exploiting the performance of the algorithm IP cores and of the password resource service in the whole HSM algorithm system, and improving the competitiveness of security products.
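Putting steps S101 to S105 together, the scheduling decision could be condensed into the following hypothetical sketch (assumed names and containers); it only restates the flow above and is not the claimed implementation.

#include <cstddef>
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

using InstructionSequence = std::vector<std::uint8_t>;

struct Request   { int instructionId; InstructionSequence sequence; };   // S101
struct Scheduler {
    std::vector<InstructionSequence> cacheQueue;
    std::vector<bool> threadIdle;   // true = idle thread in the thread pool
    std::vector<bool> coreIdle;     // true = idle algorithm IP core in the resource pool
};

bool schedule(Request req, std::map<int, Scheduler*>& schedulerByInstructionId) {
    auto it = schedulerByInstructionId.find(req.instructionId);           // S102: pick the scheduler
    if (it == schedulerByInstructionId.end()) return false;
    Scheduler& s = *it->second;
    s.cacheQueue.push_back(std::move(req.sequence));                      // S103: enqueue the sequence
    auto pickIdle = [](std::vector<bool>& v) -> std::optional<std::size_t> {
        for (std::size_t i = 0; i < v.size(); ++i)
            if (v[i]) { v[i] = false; return i; }
        return std::nullopt;
    };
    auto worker = pickIdle(s.threadIdle);                                 // S103: first idle thread
    auto core   = pickIdle(s.coreIdle);                                   // S104: idle algorithm IP core
    if (!worker || !core) return false;
    // S105: the chosen thread would now read the sequence back from the cache
    // queue and invoke the chosen core's driver (omitted in this sketch).
    return true;
}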
In some embodiments, the "determining the first instruction scheduler corresponding to the first instruction sequence" in the above step S102 may be implemented by the following steps S301 to S302, and the following description will be made in connection with the steps shown in fig. 3:
S301, determining a first instruction processing object corresponding to the first instruction sequence based on the instruction type of the first instruction sequence.
The first instruction processing object corresponding to the first instruction sequence may be obtained from the instruction type of the first instruction sequence; in one possible implementation, the instruction type of the first instruction sequence may be derived from an instruction identifier (ID): instruction sequences of the same instruction type correspond to the same ID, and the same ID corresponds to the same instruction processing object.
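A possible shape for this ID-to-handler mapping is sketched below; the table, the scheduler number field and the handler signature are assumptions introduced only to illustrate the lookup.

#include <cstdint>
#include <functional>
#include <map>
#include <vector>

struct InstructionProcessingObject {
    int schedulerNumber;   // the "first number" that identifies the instruction scheduler
    std::function<void(const std::vector<std::uint8_t>&)> handle;   // parses and processes the sequence
};

std::map<int, InstructionProcessingObject> handlerById = {
    // Entries of the form {instructionId, {schedulerNumber, handler}} would be
    // registered here, e.g. an AES instruction ID mapping to the SKE scheduler.
};

const InstructionProcessingObject* lookupHandler(int instructionId) {
    auto it = handlerById.find(instructionId);
    return it == handlerById.end() ? nullptr : &it->second;   // same ID, same processing object
}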
S302, a first number representing the first instruction scheduler is obtained based on the first instruction processing object, and the first instruction scheduler is determined based on the first number.
In the embodiment of the present application, the first instruction processing object may include instruction processing method information corresponding to the first instruction sequence, where the instruction processing method information includes a first number corresponding to the first instruction sequence and representing the first instruction scheduler, and further after the first instruction processing object is determined, the first instruction scheduler may be found based on the first number.
That is, in the embodiment of the present application, the determination of the first instruction scheduler may be that the first instruction processing object corresponding to the first instruction sequence is determined first, and then the determination is based on the first number given by the first instruction processing object and characterizing the first instruction scheduler.
It should be noted that, in the embodiment of the present application, determining the first instruction scheduler also initializes the received first instruction sequence: the determination produces the initialized first instruction sequence in the form of a packaged structure.
In some embodiments, the "invoking the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence" in the step S105 may be implemented by the following steps S401 to S402, and the following description will be made in connection with the steps shown in fig. 4:
S401, calling a first instruction processing object corresponding to the first instruction sequence based on the first thread.
In the embodiment of the application, after the activated first thread reads the first data packet matched with the first instruction sequence from the first instruction cache queue, the first thread calls the first instruction processing object, and the first instruction processing object analyzes the first data packet in the first thread to obtain an analysis result. The first instruction processing object comprises instruction processing method information corresponding to a first instruction sequence.
S402, invoking the first algorithm IP core to process the first instruction sequence based on the first instruction processing object.
In the embodiment of the present application, based on the parsing result obtained in step S401, the first instruction processing object determines the first algorithm IP core from the first cryptographic resource pool, and after the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core, the first instruction processing object invokes the driver of the first algorithm IP core to execute the function of the first algorithm IP core.
In some embodiments, the first instruction sequence added to the first instruction cache queue in step S103 provided in the foregoing embodiments refers to the initialized first instruction sequence, that is, a first data packet matched with the first instruction sequence, where the first data packet includes at least information of the first instruction sequence itself, execution data required by the system to execute the first instruction sequence, a first instruction processing object, a first instruction dispatcher, and the like.
Here, the first packet is added to the first instruction cache queue for the thread to call.
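The initialized packet described above could look like the following hypothetical structure (field names assumed): everything a worker thread needs is carried together so the packet can sit in the cache queue until a thread picks it up.

#include <cstdint>
#include <vector>

struct InstructionProcessingObject;   // parses and executes one instruction type
struct InstructionScheduler;          // owns the queue, the thread pool and the password resource pool

struct InstructionPacket {
    std::vector<std::uint8_t>    sequence;          // the first instruction sequence itself
    std::vector<std::uint8_t>    executionData;     // data required to execute the sequence
    InstructionProcessingObject* handler   = nullptr;   // the first instruction processing object
    InstructionScheduler*        scheduler = nullptr;   // the first instruction scheduler
};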
In some embodiments, the step S104 provided in the above embodiments may be implemented by the following steps S501 to S502, and the following description will be made in connection with the steps shown in fig. 5:
S501, determining at least one idle algorithm IP core in the first password resource pool.
In the embodiment of the present application, the first cryptographic resource pool may include a plurality of algorithm IP cores of the same type, and the first instruction processing object determines a free algorithm IP core in the first cryptographic resource pool, where the number of determined free algorithm IP cores includes at least one.
S502, determining a first algorithm IP core corresponding to the first instruction sequence from the at least one idle algorithm IP core.
In the embodiment of the application, after determining a plurality of idle algorithm IP cores in the first cryptographic resource pool, the first instruction processing object may sequentially select a first idle algorithm IP core from the plurality of idle algorithm IP cores as the first algorithm IP core according to the numbering sequence of the algorithm IP cores, or may randomly select one idle algorithm IP core from the plurality of idle algorithm IP cores as the first algorithm IP core.
Determining an idle algorithm IP core from the first password resource pool to execute the function corresponding to that core realizes flexible scheduling of algorithm IP cores by always taking an idle core from the password resource pool.
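Both selection strategies mentioned above (the lowest-numbered idle core, or a random idle core) are captured in the following sketch; the structure and names are assumptions.

#include <cstddef>
#include <cstdlib>
#include <optional>
#include <vector>

struct AlgoIpCore { int id; bool busy; };

std::optional<std::size_t> pickIdleCore(const std::vector<AlgoIpCore>& pool, bool randomize) {
    std::vector<std::size_t> idle;
    for (std::size_t i = 0; i < pool.size(); ++i)
        if (!pool[i].busy) idle.push_back(i);          // S501: collect the idle algorithm IP cores
    if (idle.empty()) return std::nullopt;
    if (!randomize) return idle.front();               // S502: first idle core in numbering order
    return idle[static_cast<std::size_t>(std::rand()) % idle.size()];   // S502: or a random idle core
}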
The method for scheduling the password resources provided by the embodiment of the application can be applied to the following scenes:
The first thread pool comprises 4 threads, namely a thread 1, a thread 2, a thread 3 and a thread 4, and the second thread pool comprises 4 threads, namely a thread 5, a thread 6, a thread 7 and a thread 8; when the HSM algorithm system receives instruction data, a corresponding first instruction processing object is found according to the type of an instruction so as to determine that the instruction corresponds to the first instruction scheduler, and at the moment, idle threads in threads 1 to 4 included in the first thread pool are determined to process the received instruction data.
In addition, in the present scenario, the first cryptographic resource pool includes algorithm IP core 1, algorithm IP core 2, algorithm IP core 3 and algorithm IP core 4, and the second cryptographic resource pool includes 4 algorithm IP cores, i.e., algorithm IP core 5, algorithm IP core 6, algorithm IP core 7 and algorithm IP core 8, at the same time. And determining an idle algorithm IP core from the algorithm IP cores 1 to 4 included in the first password resource pool corresponding to the first instruction dispatcher, and executing the function of the acquired algorithm IP core by adopting a processing method of an instruction processing object.
In some embodiments, the chip-based cryptographic resource scheduling method provided in the foregoing embodiments may further include the following:
receiving a second cryptographic resource service request, the second cryptographic resource service request comprising at least a second sequence of instructions, the second sequence of instructions corresponding to the first instruction scheduler;
Adding the second instruction sequence to the first instruction cache queue and determining a second thread from the first thread pool; determining a second algorithm IP core corresponding to the second instruction sequence from the first password resource pool; and reading the second instruction sequence from the first instruction cache queue, and calling the second thread to execute the processing corresponding to the second algorithm IP core on the second instruction sequence.
In the embodiment of the present application, while the first instruction sequence is added to the first instruction cache queue, a first thread is determined from the first thread pool, the first algorithm IP core corresponding to the first instruction sequence is determined from the first password resource pool, the first instruction sequence is read from the first instruction cache queue, and CPU core 1 calls the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence. At the same time, the first instruction scheduler determines a second thread from the first thread pool, adds the second instruction sequence to the first instruction cache queue, and determines the second algorithm IP core corresponding to the second instruction sequence from the first password resource pool; CPU core 2 then reads the second instruction sequence from the first instruction cache queue through the second thread and processes it with the second algorithm IP core on the second thread. That is, in the embodiment of the present application, several IP cores of the same type can run in parallel.
In some embodiments, the chip-based cryptographic resource scheduling method provided in the foregoing embodiments may further include the following:
Receiving a third cryptographic resource service request, the third cryptographic resource service request including at least a third sequence of instructions, the third sequence of instructions corresponding to the first instruction scheduler;
And under the condition that the first thread submits a first task corresponding to the first instruction sequence to the first algorithm IP core, switching a working thread from the first thread to a third thread corresponding to the third instruction sequence, reading the third instruction sequence from the first instruction cache queue, and calling the third thread to execute processing corresponding to the third algorithm IP core corresponding to the third instruction sequence on the third instruction sequence.
In the embodiment of the present application, the third instruction sequence and the first instruction sequence may be executed alternately based on the same CPU core, where the switching between threads may be active or passive. In some possible implementations, after the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core, the first task is started to be executed by the first algorithm IP core; at this time, under the condition that a third task corresponding to a third instruction sequence waits for execution, determining a third thread and a third algorithm IP core corresponding to the third instruction sequence, switching a working thread from the first thread to the third thread, reading the third instruction sequence from the first instruction cache queue by the third thread, calling the third thread to execute processing corresponding to the third algorithm IP core on the third instruction sequence, namely analyzing the third instruction sequence by the third thread, submitting the third task corresponding to the third instruction sequence to the third algorithm IP core, and simultaneously executing a function corresponding to the third algorithm IP core based on a single CPU core of a processor by a driver of the third algorithm IP core.
In the embodiment of the application, concurrent execution means that multiple tasks or threads in the HSM run within the same time interval on a single CPU core. Parallel execution means that multiple tasks or threads in the HSM run at the same time on different CPU cores.
In some embodiments, the above-mentioned "switching the working thread from the first thread to the third thread corresponding to the third instruction sequence in the case that the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core" may be implemented in a manner one or a manner two:
In a first mode, when the first thread waits for the first task to be executed successfully by the first algorithm IP core and detects that a third task corresponding to the third instruction sequence waits for execution, the working thread is switched from the first thread to a third thread corresponding to the third instruction sequence.
In the embodiment of the application, after a first thread reads a first instruction sequence from a first instruction cache queue, the first instruction sequence is analyzed, then the first thread submits a first task corresponding to the first instruction sequence to a first algorithm IP core for execution, and the first task is executed by the first algorithm IP core until success. When the first thread detects that the third thread waits to execute a third task corresponding to a third instruction sequence while the first thread waits for the first algorithm IP core to execute the first task until success, the working thread is switched from the first thread to the third thread, and the CPU gives up the execution of the first task and actively switches to the execution of the third task.
In a second mode, when a third task corresponding to the third instruction sequence waits for execution, the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core, then reduces the priority of the first thread, and takes the third thread corresponding to the third instruction sequence as a working thread.
In the embodiment of the application, after the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core, and under the condition that the third thread waits to execute the third task corresponding to the third instruction sequence, the first thread reduces the priority of the first thread, so that the priority of the first thread is lower than that of the third thread, the working thread is passively switched from the first thread to the third thread, the third thread reads the third instruction sequence from the first instruction cache queue, and the processing corresponding to the third algorithm IP core is executed on the third instruction sequence.
Therefore, when the CPU faces at least two instructions to be executed at the same time, or receives another instruction to be executed while the first instruction sequence is being executed, lowering the thread's priority or switching threads while a thread waits for its task to finish improves task execution efficiency, effectively increases the amount of password resource service the HSM algorithm system can provide per unit time, and improves the effective utilization of the CPU.
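The first switching mode could be approximated by the sketch below, in which the waiting thread yields the CPU whenever another task is pending; the second mode, lowering the thread's priority, would instead use an operating-system scheduling call and is only indicated by a comment. The flags and names are assumptions.

#include <atomic>
#include <thread>

void waitForIpCore(const std::atomic<bool>& coreDone,
                   const std::atomic<bool>& otherTaskPending)
{
    // Mode two would lower this thread's priority here (OS-specific call) so the
    // thread holding the pending task becomes the working thread instead.
    while (!coreDone.load(std::memory_order_acquire)) {       // the IP core has not finished yet
        if (otherTaskPending.load(std::memory_order_relaxed))
            std::this_thread::yield();                        // mode one: give the CPU core away
    }
}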
The method for scheduling the password resources provided by the embodiment of the application can be applied to the following scenes:
Scene one: under the condition that the CPU is a four-core CPU, a first thread pool comprises a thread 1, a thread 2, a thread 3, a thread 4, a thread 5 and a thread 6, wherein the thread 1 to the thread 6 serve the instruction scheduler 1, when the HSM algorithm system receives the instruction data 1, the instruction data 2, the instruction data 3 and the instruction data 4, a corresponding first instruction processing object is found according to the type of an instruction so as to determine that the instruction corresponds to the first instruction scheduler, and at the moment, 4 idle threads in the threads 1 to 6 included in the first thread pool are determined to read the instruction data 1 to 4 respectively; then determining 4 idle algorithm IP cores from algorithm IP cores 1 to 6 included in a first password resource pool corresponding to the first instruction dispatcher; during instruction data processing, instruction data 1 is added into a first instruction cache queue, a thread 1 is determined from the first thread pool, an algorithm IP core 1 is determined from 4 idle algorithm IP cores in the first password resource pool, then the thread 1 reads the instruction data 1 from the first instruction cache queue, and a CPU core 1 calls the thread 1 to execute processing corresponding to the algorithm IP core 1 on the instruction data 1; meanwhile, instruction data 2 is added into a first instruction cache queue, a thread 2 is determined from the first thread pool, an algorithm IP core 2 is determined from the first password resource pool, then the thread 2 reads the instruction data 2 from the first instruction cache queue, and the CPU core 2 calls the thread 2 to execute corresponding processing of the algorithm IP core 2 on the instruction data 2; meanwhile, the instruction data 3 is added to a first instruction cache queue, a thread 3 is determined from the first thread pool, an algorithm IP core 3 is determined from the first password resource pool, then the thread 3 reads the instruction data 3 from the first instruction cache queue, the CPU core 3 calls the thread 3 to execute the corresponding processing of the algorithm IP core 3 on the instruction data 3, and the instruction data 4 is similar and is not repeated herein. Thus, the N CPU cores can execute the functions of N algorithm IP cores based on N threads at the same time in the scene.
In a second scenario, when the CPU is a dual-core CPU, the first thread pool includes a thread 1, a thread 2, a thread 3, and a thread 4, where the thread 1 to the thread 4 serve the instruction scheduler 1, and when the HSM algorithm system receives the instruction data 1, the instruction data 2, and the instruction data 3, a corresponding first instruction processing object is found according to the type of the instruction to determine that the instruction corresponds to the first instruction scheduler; at this time, 3 idle threads among the threads 1 to 4 included in the first thread pool are determined to read the instruction data 1 to the instruction data 3; then 3 idle algorithm IP cores are determined from the algorithm IP cores 1 to 4 included in the first password resource pool corresponding to the first instruction dispatcher. During instruction data processing, instruction data 1 is added to the first instruction cache queue, thread 1 is determined from the first thread pool, algorithm IP core 1 is determined from the 3 idle algorithm IP cores in the first password resource pool, then thread 1 reads the instruction data 1 from the first instruction cache queue, and CPU core 1 calls thread 1 to execute the processing corresponding to algorithm IP core 1 on the instruction data 1; meanwhile, instruction data 2 is added to the first instruction cache queue, thread 2 is determined from the first thread pool, algorithm IP core 2 is determined from the first password resource pool, then the first instruction dispatcher reads the instruction data 2 from the first instruction cache queue, and CPU core 2 calls thread 2 to execute the processing corresponding to algorithm IP core 2; after thread 2 submits the task corresponding to instruction data 2 to algorithm IP core 2, and when the task corresponding to instruction data 3 waits to be executed, thread 3 and algorithm IP core 3 corresponding to instruction data 3 are determined, the working thread is switched from thread 2 to thread 3, thread 3 reads the instruction data 3 from the first instruction cache queue, and thread 3 is called to execute the processing corresponding to algorithm IP core 3. Thus, in the present scenario, D CPU cores simultaneously execute the functions of M algorithm IP cores based on M threads, where D is smaller than M.
In some embodiments, the chip-based cryptographic resource scheduling method provided in the foregoing embodiments may further include the following:
receiving a fourth password resource service request, wherein the fourth password resource service request at least comprises a fourth instruction sequence;
determining a second instruction dispatcher corresponding to the fourth instruction sequence, wherein the second instruction dispatcher is associated with a second instruction cache queue, a second thread pool and a second password resource pool; adding the fourth instruction sequence to a second instruction cache queue, and determining a fourth thread from the second thread pool, wherein the second thread pool at least comprises two threads; determining a fourth algorithm IP core corresponding to the fourth instruction sequence from the second password resource pool, wherein the second password resource pool at least comprises two algorithm IP cores; and reading the fourth instruction sequence from the second instruction cache queue, and calling the fourth thread to execute the processing corresponding to the fourth algorithm IP core on the fourth instruction sequence.
In the embodiment of the present application, the fourth instruction sequence corresponds to a different instruction scheduler from each of the first instruction sequence, the second instruction sequence, and the third instruction sequence.
In the embodiment of the application, the first instruction cache queue is responsible for caching the received first instruction sequence for the first instruction scheduler, and the second instruction cache queue is responsible for caching the received fourth instruction sequence for the second instruction scheduler, so that each executes the functions of its corresponding algorithm IP cores. Here, storing the data to be executed in dedicated instruction cache queues allows multiple CPU cores to execute multiple algorithm IP cores in parallel based on multithreading.
In the embodiment of the application, for two instruction sequences corresponding to the same instruction dispatcher, two different algorithm IP cores of the same kind are acquired based on the same instruction processing object to execute the parallel algorithm IP core function; for two instruction sequences corresponding to different instruction schedulers, determining the types of different algorithm IP cores based on different instruction processing objects, and executing the parallel algorithm IP core functions by the two algorithm IP cores of different types.
In the embodiment of the application, a fourth thread is determined from the second thread pool of the second instruction scheduler corresponding to the fourth instruction sequence, a fourth data packet matching the fourth instruction sequence is added to the second instruction cache queue, and a fourth algorithm IP core corresponding to the fourth instruction sequence is then determined from the second password resource pool; the fourth instruction sequence is then read from the second instruction cache queue by the fourth thread and processed with the fourth algorithm IP core on the fourth thread. In the first instruction scheduling module, the first thread and the first algorithm IP core are determined from the first thread pool and the first password resource pool corresponding to the first instruction sequence, and the function corresponding to the first algorithm IP core is executed in the first thread; in the second instruction scheduling module, the HSM algorithm system determines the fourth thread and the fourth algorithm IP core from the second thread pool and the second password resource pool corresponding to the fourth instruction sequence, executes the function corresponding to the fourth algorithm IP core in the fourth thread, and outputs the execution results to the communication port through the instruction processing objects corresponding respectively to the first instruction sequence and the fourth instruction sequence.
In the embodiment of the application, the number of scheduling modules, the number of threads in the thread pool of each scheduling module and the number of algorithm IP cores in the password resource pool of each scheduling module are determined according to the device requirements and the concurrent execution capacity of the system in the actual application process, and are not limited here. Thus, parallel scheduling and execution of different algorithm IP cores in the system can be realized to the greatest extent.
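A deployment might therefore describe its scheduling modules with a small configuration table like the hypothetical one below; the counts shown are example values only and are not prescribed by the application.

#include <cstddef>
#include <vector>

struct SchedulingModuleConfig {
    const char* name;        // e.g. "SKE", "PKE", "HASH", "TRNG"
    std::size_t coreCount;   // IP cores in the module's password resource pool; the
                             // thread pool is sized to match
};

std::vector<SchedulingModuleConfig> exampleConfig = {
    {"SKE", 4}, {"PKE", 2}, {"HASH", 2}, {"TRNG", 1},   // illustrative numbers only
};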
In some embodiments, as shown in fig. 6, a software class diagram of a chip-based cryptographic resource scheduling method suitable for the embodiments of the present application is provided, including an HSM management class 601, an instruction scheduling class 602, a thread pool class 603, an instruction processing class 604, an algorithm IP core management class 605, an instruction cache queue 606, a communication port class 607, and a cryptographic driver proxy class 608, which are described below in connection with fig. 6:
HSM management class 601: the HSM algorithm system controls the steps of the scheduling method through the instruction scheduling class 602, the instruction processing class 604 and the communication port class 607;
The instruction scheduling class 602 at least comprises four schedulers, namely a SKE scheduler 6021, a PKE scheduler 6022, a HASH scheduler 6023 and a TRNG scheduler 6024, wherein each scheduler is responsible for scheduling instructions of respective algorithm IP cores;
The thread pool class 603 cooperates with the instruction scheduling class to acquire cached instructions from an instruction cache queue and then, through a worker thread 6031, invokes a method of the instruction processing class to process the instructions in the instruction cache queue; each instruction scheduler class has an associated thread pool (not shown in the figure);
The instruction processing class 604 includes various instruction processing methods, each instruction corresponding to an instruction processing object. The instruction processing class at least includes symmetric encryption algorithm AES (Advanced Encryption Standard) instruction processing 6041, asymmetric encryption algorithm RSA (Rivest-Shamir-Adleman) instruction processing 6042, HASH instruction processing 6043, random number instruction processing 6044 and the like, wherein the AES instruction processing 6041 corresponds to the SKE scheduler 6021, the RSA instruction processing 6042 corresponds to the PKE scheduler 6022, the HASH instruction processing 6043 corresponds to the HASH scheduler 6023, and the random number instruction processing 6044 corresponds to the TRNG scheduler 6024.
The algorithm IP core management class 605 is responsible for managing the states of the algorithm IP cores and allocating their functions; an idle algorithm IP core needs to be determined from the algorithm IP core management class 605 before an instruction is executed. After the algorithm IP core is obtained, the instruction scheduling class 602 calls the encryption driver proxy class 608 corresponding to the algorithm IP core to execute the function of the corresponding algorithm (the calls from the instruction scheduling class 602 to the encryption drivers of the individual algorithm IP cores are not shown in the figure). The password resource pools managed by the algorithm IP core management class 605 at least include four algorithm pools, namely an SKE algorithm IP core pool 6051, a PKE algorithm IP core pool 6052, a HASH algorithm IP core pool 6053 and a TRNG algorithm IP core pool 6054, wherein the SKE algorithm IP core pool 6051 provides algorithm IP cores for the SKE scheduler 6021, the PKE algorithm IP core pool 6052 provides algorithm IP cores for the PKE scheduler 6022, the HASH algorithm IP core pool 6053 provides algorithm IP cores for the HASH scheduler 6023, and the TRNG algorithm IP core pool 6054 provides algorithm IP cores for the TRNG scheduler 6024.
The instruction cache queue 606 exists in association with the instruction scheduling class 602 and is responsible for caching received instructions for the instruction scheduling class 602;
The communication port class 607 is responsible for receiving and transmitting instruction data and at least comprises two communication ports, namely a uart 6071 and a mailbox 6072;
The encryption driver proxy class 608 is responsible for providing the instruction processing class 604 with the corresponding encryption driver to perform the functions corresponding to the algorithm IP core.
In the embodiment of the present application, after the HSM management class 601 receives a cryptographic resource service request (corresponding to the first cryptographic resource service request in the embodiment of the present application) through the communication port class 607, the HSM management class 601 determines an instruction processing object in the instruction processing class 604 through the instruction data in the cryptographic resource service request and finds the corresponding instruction scheduler in the instruction scheduling class 602, and the instruction scheduler determines an idle thread from the thread pool class 603. Meanwhile, the HSM management class 601 adds the initialized instruction data to the instruction cache queue 606 of the instruction scheduling class 602; the idle thread then reads the instruction data from the instruction cache queue 606, the instruction processing class 604 further determines the algorithm IP core corresponding to the cryptographic resource service request from the algorithm IP core management class 605 and invokes the encryption driver in the encryption driver proxy class 608 to execute the function of the algorithm IP core, thereby completing the cryptographic resource scheduling, and finally the instruction processing class 604 outputs the execution result to the CPU through the communication port class 607.
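The class relationships described above can be summarized, purely as one illustrative reading of fig. 6, in the following C++ skeleton; the member and method names are assumptions, and only the roles of the classes follow the description.

```cpp
// Skeleton mirroring the fig. 6 class diagram (bodies omitted; names hypothetical).
#include <cstdint>
#include <vector>

class CommPort          { /* uart / mailbox: receives and sends instruction data */ };
class CryptoDriverProxy { /* wraps the encryption driver of one algorithm IP core */ };
class IpCorePool        { /* tracks the idle/busy state of SKE/PKE/HASH/TRNG IP cores */ };
class InstructionQueue  { /* caches received instruction data for one scheduler */ };
class WorkerThreadPool  { /* worker threads that drain the instruction cache queue */ };

class InstructionProcessor {            // one object per instruction type
public:
    virtual ~InstructionProcessor() = default;
    virtual void Process(const std::vector<uint8_t>& instruction,
                         IpCorePool& cores, CryptoDriverProxy& driver) = 0;
};

class InstructionScheduler {            // one of the SKE / PKE / HASH / TRNG schedulers
public:
    InstructionQueue queue;
    WorkerThreadPool threads;
    IpCorePool       cores;
};

class HsmManager {                      // routes requests to the matching scheduler
public:
    void OnRequest(const std::vector<uint8_t>& instruction_data, CommPort& port);
private:
    std::vector<InstructionScheduler> schedulers_;
};
```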
The embodiment of the application provides a chip-based password resource scheduling method, as shown in fig. 7, comprising the following steps:
S701, the mailbox receives instruction data from the CPU;
S702, the communication port class reports the received instruction data to the HSM management class;
S703, the HSM management class finds the corresponding instruction processing object from the instruction processing class through the serial number of the instruction;
S704, the instruction processing class returns the found instruction processing object to the HSM management class;
S705, the HSM management class obtains, from the found instruction processing object, the serial number characterizing the instruction scheduling class;
S706, the HSM management class determines the corresponding instruction scheduling object from the instruction scheduling class according to the serial number characterizing the instruction scheduling class;
S707, the instruction scheduling class returns the determined instruction scheduling object to the HSM management class;
S708, the HSM management class puts the initialized data into the instruction cache queue of the instruction scheduling object;
S709, the instruction scheduling class notifies the corresponding thread pool;
S710, the instruction scheduling object determines an idle thread from the thread pool class;
S711, the determined idle thread is woken up;
The awakened idle thread can be regarded as a working thread class;
S712, the awakened thread reads the instruction data from the instruction cache queue;
S713, the awakened thread calls a method of the instruction processing class;
S714, the instruction processing class determines an idle algorithm IP core from the algorithm core management class;
S715, the algorithm core management class returns the determined algorithm IP core to the instruction processing class;
S716, the instruction is processed using the method corresponding to the instruction processing object in the instruction processing class;
S717, the algorithm core notifies the instruction processing class of the processing result;
S718, the instruction processing class notifies the mailbox of the processing result.
In the embodiment of the present application, the modules identified by the solid lines shown in fig. 7 are hardware modules, and the modules identified by the dashed lines are software modules implemented on the HSM.
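As a minimal, self-contained illustration of the S701 to S718 sequence, the toy program below models one scheduler with an instruction cache queue and a single worker thread; every name in it is an assumption, and it stands in for, rather than reproduces, the HSM firmware shown in fig. 7.

```cpp
// Toy model of the dispatch sequence: enqueue instruction data, wake an idle
// worker thread, process the instruction, report the result. Hypothetical code.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

int main() {
    std::queue<std::string> instruction_queue;   // instruction cache queue of one scheduler
    std::mutex m;
    std::condition_variable cv;
    bool notified = false;

    // Worker thread of the thread pool: corresponds to S710-S716.
    std::thread worker([&] {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return notified; });              // S711: idle thread is woken up
        while (!instruction_queue.empty()) {
            std::string data = instruction_queue.front();   // S712: read from the cache queue
            instruction_queue.pop();
            lk.unlock();
            // S713-S716: call the instruction processing method on an idle algorithm IP core
            std::string result = "processed(" + data + ")";
            std::cout << result << '\n';                     // S717/S718: report the result
            lk.lock();
        }
    });

    {
        std::lock_guard<std::mutex> lk(m);
        instruction_queue.push("AES encrypt block");         // S708: enqueue instruction data
        notified = true;                                      // S709: notify the thread pool
    }
    cv.notify_one();                                          // S710/S711: wake an idle thread
    worker.join();
    return 0;
}
```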
Based on the foregoing embodiments, an embodiment of the present application provides an electronic device, as shown in fig. 8, which may include: a receiving unit 81, a first determining unit 82, a selecting unit 83, a second determining unit 84, and an executing unit 85, wherein:
a receiving unit 81, configured to receive a first cryptographic resource service request, where the first cryptographic resource service request includes at least a first instruction sequence;
A first determining unit 82, configured to determine a first instruction scheduler corresponding to the first instruction sequence, where the first instruction scheduler is associated with a first instruction cache queue, a first thread pool, and a first password resource pool;
A selection unit 83, configured to add the first instruction sequence to the first instruction cache queue, and determine a first thread from the first thread pool, where the first thread pool includes at least two threads;
a second determining unit 84, configured to determine a first algorithm IP core corresponding to the first instruction sequence from the first cryptographic resource pool, where the first cryptographic resource pool includes at least two algorithm IP cores;
And the execution unit 85 is configured to read the first instruction sequence from the first instruction cache queue, and call the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence.
In other embodiments of the present application, the first determining unit 82 is further configured to perform the following:
Determining a first instruction processing object corresponding to the first instruction sequence based on the instruction type of the first instruction sequence;
A first number characterizing the first instruction scheduler is obtained based on the first instruction processing object, and the first instruction scheduler is determined based on the first number.
In other embodiments of the present application, the selecting unit 83 is further configured to perform the following:
determining a first data packet matched with the first instruction sequence, wherein the first data packet comprises the first instruction sequence and execution data corresponding to the first instruction;
The first data packet is added to the first instruction cache queue.
In other embodiments of the present application, the second determining unit 84 is further configured to perform the following:
Determining at least one idle algorithm IP core in the first cryptographic resource pool;
and determining a first algorithm IP core corresponding to the first instruction sequence from the at least one idle algorithm IP core.
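A minimal sketch of one such selection policy is given below, assuming the pool is stored in the numbering order of its algorithm IP cores (consistent with the numbering-order selection recited in the claims); the IpCore structure and its busy flag are illustrative assumptions rather than the patent's data layout.

```cpp
// Pick the first idle algorithm IP core when the pool is ordered by core number.
#include <cstddef>
#include <vector>

struct IpCore {
    std::size_t number;   // position of the core in the pool's numbering order
    bool busy;            // state tracked by the algorithm IP core management class
};

// Returns the index of the lowest-numbered idle core, or -1 if every core is busy.
int SelectIdleCore(const std::vector<IpCore>& pool) {
    for (std::size_t i = 0; i < pool.size(); ++i) {
        if (!pool[i].busy) {
            return static_cast<int>(i);
        }
    }
    return -1;   // the caller may queue the task until a core becomes free
}
```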
In other embodiments of the present application, the execution unit 85 is further configured to execute the following:
invoking a first instruction processing object corresponding to the first instruction sequence based on the first thread;
and invoking the first algorithm IP core to process the first instruction sequence based on the first instruction processing object.
In other embodiments of the application, the electronic device is further configured to perform the following:
receiving a second cryptographic resource service request, the second cryptographic resource service request comprising at least a second sequence of instructions, the second sequence of instructions corresponding to the first instruction scheduler;
adding the second instruction sequence to the first instruction cache queue and determining a second thread from the first thread pool;
determining a second algorithm IP core corresponding to the second instruction sequence from the first password resource pool;
and reading the second instruction sequence from the first instruction cache queue, and calling the second thread to execute the processing corresponding to the second algorithm IP core on the second instruction sequence.
In other embodiments of the application, the electronic device is further configured to perform the following:
Receiving a third cryptographic resource service request, the third cryptographic resource service request including at least a third sequence of instructions, the third sequence of instructions corresponding to the first instruction scheduler;
And under the condition that the first thread submits a first task corresponding to the first instruction sequence to the first algorithm IP core, switching a working thread from the first thread to a third thread corresponding to the third instruction sequence, reading the third instruction sequence from the first instruction cache queue, and calling the third thread to execute processing corresponding to the third algorithm IP core corresponding to the third instruction sequence on the third instruction sequence.
In other embodiments of the application, the electronic device is further configured to perform the following:
And switching the working thread from the first thread to a third thread corresponding to the third instruction sequence under the condition that the first thread detects that the third task corresponding to the third instruction sequence waits for execution while the first algorithm IP core successfully executes the first task.
In other embodiments of the application, the electronic device is further configured to perform the following:
And under the condition that a third task corresponding to the third instruction sequence waits to be executed, after the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core, the priority of the first thread is reduced, and the third thread corresponding to the third instruction sequence is used as a working thread.
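The hand-over between the first thread and the third thread can be pictured with the following sketch, which simply yields the CPU instead of manipulating scheduler priorities; the task structure and flags are assumptions and do not reproduce the patent's firmware.

```cpp
// Once a worker has submitted its task to an algorithm IP core it does not spin
// on the hardware: if another task is waiting, it yields so that the thread
// owning that task can become the working thread. Hypothetical types throughout.
#include <atomic>
#include <thread>

struct Task {
    std::atomic<bool> hw_done{false};   // set by the IP core driver when the hardware finishes
};

void SubmitToIpCore(Task& t) {
    // Hand the task to the algorithm IP core; the driver flips t.hw_done on completion.
    (void)t;
}

void WorkerLoop(Task& my_task, const std::atomic<bool>& other_task_pending) {
    SubmitToIpCore(my_task);             // the first task is handed to the first IP core
    while (!my_task.hw_done.load()) {
        if (other_task_pending.load()) {
            std::this_thread::yield();   // let the thread owning the waiting task run
        }
    }
    // Collect and return the result of the first task once the IP core signals completion.
}
```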
In other embodiments of the application, the electronic device is further configured to perform the following:
receiving a fourth password resource service request, wherein the fourth password resource service request at least comprises a fourth instruction sequence;
Determining a second instruction dispatcher corresponding to the fourth instruction sequence, wherein the second instruction dispatcher is associated with a second instruction cache queue, a second thread pool and a second password resource pool;
Adding the fourth instruction sequence to a second instruction cache queue, and determining a fourth thread from the second thread pool, wherein the second thread pool at least comprises two threads;
determining a fourth algorithm IP core corresponding to the fourth instruction sequence from the second password resource pool, wherein the second password resource pool at least comprises two algorithm IP cores;
And reading the fourth instruction sequence from the second instruction cache queue, and calling the fourth thread to execute the processing corresponding to the fourth algorithm IP core on the fourth instruction sequence.
The electronic device provided by the embodiment of the application can be directly connected to a server in the form of an expansion card or an external device, so that, in key-related scenarios, multiple algorithm IP cores execute their functions in parallel by means of operating-system multithreading, and the performance of the algorithm IP cores and of the whole HSM algorithm system is fully exploited.
Based on the foregoing embodiments, the present application provides an electronic device that, on the product side, applies the electronic device provided in the foregoing embodiments to implement the cryptographic resource scheduling method provided in the foregoing embodiments. As shown in fig. 9, the security chip provided in the foregoing embodiments may be an HSM hardware device 92. The electronic device may include: a host CPU 91, an HSM hardware device 92 and a communication port 93. The host CPU at least includes an encryption service 911 for providing encryption services to the application layer of AUTomotive Open System ARchitecture (AUTOSAR) software; an encryption driver 913 for implementing communication with the HSM and the transmission and reception of instructions; and an encryption interface 912 for encapsulating the encryption driver and providing a unified interface to the upper layers.
The communication port 93 is used to realize communication connection between the host CPU 91 and the HSM hardware device 92;
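The host-side layering of encryption service, encryption interface and encryption driver over the communication port can be read, as an assumption-laden sketch only, in the following C++ skeleton; these classes are illustrative and are not the AUTOSAR crypto stack API.

```cpp
// Hypothetical host-side layering: the service serves the application layer, the
// interface presents a unified API, and the driver exchanges instruction data
// with the HSM over the mailbox communication port.
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

class Mailbox {                        // communication port between host CPU and HSM
public:
    void Send(const Bytes& instruction_data);
    Bytes ReceiveResult();
};

class EncryptionDriver {               // sends/receives instructions to/from the HSM
public:
    explicit EncryptionDriver(Mailbox& mb) : mailbox_(mb) {}
    Bytes Execute(const Bytes& instruction_data) {
        mailbox_.Send(instruction_data);
        return mailbox_.ReceiveResult();
    }
private:
    Mailbox& mailbox_;
};

class EncryptionInterface {            // unified interface wrapping the driver
public:
    explicit EncryptionInterface(EncryptionDriver& drv) : driver_(drv) {}
    Bytes Request(const Bytes& instruction_data) { return driver_.Execute(instruction_data); }
private:
    EncryptionDriver& driver_;
};

class EncryptionService {              // consumed by the application layer
public:
    explicit EncryptionService(EncryptionInterface& itf) : interface_(itf) {}
    Bytes EncryptBlock(const Bytes& plaintext) { return interface_.Request(plaintext); }
private:
    EncryptionInterface& interface_;
};
```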
When the host CPU runs the cryptographic resource scheduling program, the HSM hardware device 92 is controlled to execute the following steps of the chip-based cryptographic resource scheduling method:
Receiving a first password resource service request, wherein the first password resource service request at least comprises a first instruction sequence;
Determining a first instruction dispatcher corresponding to the first instruction sequence, wherein the first instruction dispatcher is associated with a first instruction cache queue, a first thread pool and a first password resource pool;
Adding the first instruction sequence to the first instruction cache queue and determining a first thread from the first thread pool, the first thread pool including at least two threads;
determining a first algorithm IP core corresponding to the first instruction sequence from the first password resource pool, wherein the first password resource pool at least comprises two algorithm IP cores;
and reading the first instruction sequence from the first instruction cache queue, and calling the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence.
The electronic device provided by the embodiment of the application is applied to HSM projects conforming to the AUTomotive Open System ARchitecture (AUTOSAR) standard; it can support asynchronous instructions, access algorithm IP cores in parallel and improve the competitiveness of security products.
In the embodiment of the present application, the HSM hardware device in the electronic device 9 shown in fig. 9 may implement the functions of the receiving unit, the first determining unit, the selecting unit, the second determining unit, and the executing unit in the electronic device 8 shown in fig. 8.
Based on the foregoing embodiments, embodiments of the present application provide a storage medium storing one or more programs executable by one or more host CPUs to implement the steps of the chip-based cryptographic resource scheduling method provided by the foregoing embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described in terms of flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is not intended to limit the scope of the application, but is intended to cover any modifications, equivalents, and improvements within the spirit and principles of the application.

Claims (11)

1. A chip-based cryptographic resource scheduling method, characterized by being applied to a security chip, the method comprising:
Receiving a first password resource service request, wherein the first password resource service request at least comprises a first instruction sequence;
Determining a first instruction dispatcher corresponding to the first instruction sequence in different instruction dispatchers, wherein the first instruction dispatcher is associated with a first instruction cache queue, a first thread pool and a first password resource pool; wherein, the different instruction schedulers are associated with different instruction cache queues, different thread pools and different password resource pools;
Adding the first instruction sequence to the first instruction cache queue and determining a first thread from the first thread pool, the first thread pool including at least two threads;
Determining at least one idle algorithm IP core from the first password resource pool, and selecting the idle algorithm IP core with the numbering sequence at the head position from the at least one idle algorithm IP core as a first algorithm IP core corresponding to the first instruction sequence according to the numbering sequence of the algorithm IP cores, wherein the first password resource pool at least comprises two algorithm IP cores;
Reading the first instruction sequence from the first instruction cache queue, and calling the first thread to execute the corresponding processing of the first algorithm IP core on the first instruction sequence;
Receiving a third cryptographic resource service request, the third cryptographic resource service request including at least a third sequence of instructions, the third sequence of instructions corresponding to the first instruction scheduler;
And under the condition that the first thread submits a first task corresponding to the first instruction sequence to the first algorithm IP core, switching a working thread from the first thread to a third thread corresponding to the third instruction sequence, reading the third instruction sequence from the first instruction cache queue, and calling the third thread to execute processing corresponding to the third algorithm IP core corresponding to the third instruction sequence on the third instruction sequence.
2. The chip-based cryptographic resource scheduling method of claim 1, wherein the determining a first instruction scheduler to which the first instruction sequence corresponds comprises:
Determining a first instruction processing object corresponding to the first instruction sequence based on the instruction type of the first instruction sequence;
A first number characterizing the first instruction scheduler is obtained based on the first instruction processing object, and the first instruction scheduler is determined based on the first number.
3. The chip-based cryptographic resource scheduling method of claim 1, wherein the invoking the first thread to execute the first algorithmic IP core corresponding process on the first sequence of instructions comprises:
invoking a first instruction processing object corresponding to the first instruction sequence based on the first thread;
and invoking the first algorithm IP core to process the first instruction sequence based on the first instruction processing object.
4. The chip-based cryptographic resource scheduling method of claim 1, wherein the adding the first instruction sequence to the first instruction cache queue comprises:
determining a first data packet matched with the first instruction sequence, wherein the first data packet comprises the first instruction sequence and execution data corresponding to the first instruction;
The first data packet is added to the first instruction cache queue.
5. The chip-based cryptographic resource scheduling method according to any one of claims 1 to 4, wherein the method further comprises:
receiving a second cryptographic resource service request, the second cryptographic resource service request comprising at least a second sequence of instructions, the second sequence of instructions corresponding to the first instruction scheduler;
adding the second instruction sequence to the first instruction cache queue and determining a second thread from the first thread pool;
determining a second algorithm IP core corresponding to the second instruction sequence from the first password resource pool;
and reading the second instruction sequence from the first instruction cache queue, and calling the second thread to execute the processing corresponding to the second algorithm IP core on the second instruction sequence.
6. The method according to claim 5, wherein switching a worker thread from the first thread to a third thread corresponding to the third instruction sequence if the first thread submits a first task corresponding to the first instruction sequence to the first algorithmic IP core, comprises:
And switching the working thread from the first thread to a third thread corresponding to the third instruction sequence under the condition that the first thread detects that the third task corresponding to the third instruction sequence waits for execution while the first algorithm IP core successfully executes the first task.
7. The method according to claim 5, wherein switching a worker thread from the first thread to a third thread corresponding to the third instruction sequence if the first thread submits a first task corresponding to the first instruction sequence to the first algorithmic IP core, comprises:
And under the condition that a third task corresponding to the third instruction sequence waits to be executed, after the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core, the priority of the first thread is reduced, and the third thread corresponding to the third instruction sequence is used as a working thread.
8. The chip-based cryptographic resource scheduling method according to any one of claims 1 to 4, wherein the method further comprises:
receiving a fourth password resource service request, wherein the fourth password resource service request at least comprises a fourth instruction sequence;
Determining a second instruction dispatcher corresponding to the fourth instruction sequence, wherein the second instruction dispatcher is associated with a second instruction cache queue, a second thread pool and a second password resource pool;
Adding the fourth instruction sequence to a second instruction cache queue, and determining a fourth thread from the second thread pool, wherein the second thread pool at least comprises two threads;
determining a fourth algorithm IP core corresponding to the fourth instruction sequence from the second password resource pool, wherein the second password resource pool at least comprises two algorithm IP cores;
And reading the fourth instruction sequence from the second instruction cache queue, and calling the fourth thread to execute the processing corresponding to the fourth algorithm IP core on the fourth instruction sequence.
9. An electronic device, comprising:
the first receiving unit is used for receiving a first password resource service request, and the first password resource service request at least comprises a first instruction sequence;
A first determining unit, configured to determine a first instruction scheduler corresponding to the first instruction sequence in different instruction schedulers, where the first instruction scheduler is associated with a first instruction cache queue, a first thread pool and a first password resource pool; wherein, the different instruction schedulers are associated with different instruction cache queues, different thread pools and different password resource pools;
A selection unit configured to add the first instruction sequence to the first instruction cache queue, and determine a first thread from the first thread pool, where the first thread pool includes at least two threads;
The second determining unit is used for determining at least one idle algorithm IP core from the first password resource pool, selecting the idle algorithm IP core with the numbering sequence at the head position from the at least one idle algorithm IP core as a first algorithm IP core corresponding to the first instruction sequence according to the numbering sequence of the algorithm IP cores, wherein the first password resource pool at least comprises two algorithm IP cores;
The execution unit is used for reading the first instruction sequence from the first instruction cache queue and calling the first thread to execute the processing corresponding to the first algorithm IP core on the first instruction sequence;
A second receiving unit, configured to receive a third cryptographic resource service request, where the third cryptographic resource service request includes at least a third instruction sequence, and the third instruction sequence corresponds to the first instruction scheduler;
and the switching unit is used for switching the working thread from the first thread to a third thread corresponding to the third instruction sequence under the condition that the first thread submits the first task corresponding to the first instruction sequence to the first algorithm IP core, reading the third instruction sequence from the first instruction cache queue, and calling the third thread to execute the processing corresponding to the third algorithm IP core corresponding to the third instruction sequence on the third instruction sequence.
10. An electronic device is characterized by comprising a host central processing unit CPU, a security chip and a communication port;
The communication port is used for realizing communication connection between the host CPU and the security chip;
When the host CPU runs the cryptographic resource scheduling program, controlling the secure chip to implement the chip-based cryptographic resource scheduling method according to any one of claims 1 to 8.
11. A storage medium storing an executable computer program, wherein the executable computer program, when executed by a host CPU, implements the steps of the chip-based cryptographic resource scheduling method of any one of claims 1 to 8.
CN202410105814.5A 2024-01-25 2024-01-25 Chip-based password resource scheduling method, device and storage medium Active CN117633914B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410105814.5A CN117633914B (en) 2024-01-25 2024-01-25 Chip-based password resource scheduling method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410105814.5A CN117633914B (en) 2024-01-25 2024-01-25 Chip-based password resource scheduling method, device and storage medium

Publications (2)

Publication Number Publication Date
CN117633914A CN117633914A (en) 2024-03-01
CN117633914B true CN117633914B (en) 2024-05-10

Family

ID=90021999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410105814.5A Active CN117633914B (en) 2024-01-25 2024-01-25 Chip-based password resource scheduling method, device and storage medium

Country Status (1)

Country Link
CN (1) CN117633914B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103838631A (en) * 2014-03-11 2014-06-04 武汉科技大学 Multi-thread scheduling realization method oriented to network on chip
WO2017070900A1 (en) * 2015-10-29 2017-05-04 华为技术有限公司 Method and apparatus for processing task in a multi-core digital signal processing system
CN106844282A (en) * 2016-10-26 2017-06-13 安徽扬远信息科技有限公司 A kind of many IP kernel integrated approaches in crypto chip
CN108075882A (en) * 2016-11-14 2018-05-25 航天信息股份有限公司 Cipher card and its encipher-decipher method
US10929181B1 (en) * 2019-11-22 2021-02-23 Iterate Studio, Inc. Developer independent resource based multithreading module
CN114943087A (en) * 2022-05-25 2022-08-26 广州万协通信息技术有限公司 Multi-algorithm-core high-performance SR-IOV encryption and decryption system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116670644A (en) * 2020-09-12 2023-08-29 金辛格自动化有限责任公司 Interleaving processing method on general purpose computing core

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-algorithm random job-flow scheduling method for cryptographic chips; Li Li et al.; Journal on Communications; 2016-12-25 (No. 12); pp. 86-94 *

Also Published As

Publication number Publication date
CN117633914A (en) 2024-03-01

Similar Documents

Publication Publication Date Title
US10467725B2 (en) Managing access to a resource pool of graphics processing units under fine grain control
CN109783229B (en) Thread resource allocation method and device
US8478926B1 (en) Co-processing acceleration method, apparatus, and system
US7155551B2 (en) Hardware semaphore intended for a multi-processor system
CN109388338B (en) Hybrid framework for NVMe-based storage systems in cloud computing environments
CN106354687B (en) Data transmission method and system
CN104994032B (en) A kind of method and apparatus of information processing
CN106571978B (en) Data packet capturing method and device
CN104102548A (en) Task resource scheduling processing method and task resource scheduling processing system
US8849905B2 (en) Centralized computing
CN112905342B (en) Resource scheduling method, device, equipment and computer readable storage medium
CN110866262A (en) Asynchronous encryption and decryption system and method with cooperative work of software and hardware
CN116132420B (en) Cluster password acceleration method and device for universal Internet platform
CN115048679B (en) Multi-service partition isolation chip integrating in-chip safety protection function
US11531566B2 (en) Safe and secure communication network message processing
CN112799851B (en) Data processing method and related device in multiparty security calculation
KR100799305B1 (en) High-Performance Cryptographic Device using Multiple Ciphercores and its Operation Method
CN117633914B (en) Chip-based password resource scheduling method, device and storage medium
CN111158782B (en) DPDK technology-based Nginx configuration hot update system and method
CN111459871A (en) FPGA heterogeneous computation based block chain acceleration system and method
CN113010464A (en) Data processing apparatus and device
CN114696996B (en) Hardware device for encrypting and decrypting based on multiple symmetric algorithms and multiple masters
Wu et al. Dynamic kernel/device mapping strategies for gpu-assisted hpc systems
CN113722104B (en) Vehicle-mounted domain controller chip system and method for improving safety of vehicle-mounted domain controller
CN117806802A (en) Task scheduling method based on containerized distributed system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant