CN113722103A - Encryption card calling control method and communication equipment - Google Patents

Encryption card calling control method and communication equipment

Info

Publication number
CN113722103A
CN113722103A (application CN202111062763.5A)
Authority
CN
China
Prior art keywords
data
thread
encryption
decryption
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111062763.5A
Other languages
Chinese (zh)
Inventor
樊俊诚 (Fan Juncheng)
王阳 (Wang Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qianxin Technology Group Co Ltd
Secworld Information Technology Beijing Co Ltd
Original Assignee
Qianxin Technology Group Co Ltd
Secworld Information Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qianxin Technology Group Co Ltd, Secworld Information Technology Beijing Co Ltd filed Critical Qianxin Technology Group Co Ltd
Priority to CN202111062763.5A
Publication of CN113722103A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027: Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061: Partitioning or combining of resources
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/04: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/5011: Pool
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F 9/00
    • G06F 2209/50: Indexing scheme relating to G06F 9/50
    • G06F 2209/5018: Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a call control method for an encryption card and a communication device. The method is applied to the communication device, which includes a plurality of processor cores; each processor core is provided with a corresponding thread pool, each thread pool includes a plurality of encryption and decryption threads, and each encryption and decryption thread corresponds to a processing unit in the encryption card. The method includes the following steps: after the communication device receives first data, determining whether the first data meets the condition of encryption and decryption processing; and, when the first data meets the condition of encryption and decryption processing, starting an encryption and decryption thread in the thread pool to call the processing unit corresponding to that encryption and decryption thread to perform first processing on the first data to obtain second data.

Description

Encryption card calling control method and communication equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method for controlling invocation of an encryption card and a communications device.
Background
In order to ensure the security of transmitted data, a specific encryption and decryption algorithm is usually adopted when data is transmitted over a network. At present, an encryption card is typically installed in a communication device and configured with encryption and decryption algorithms, and the communication device then encrypts and decrypts data by calling the encryption card.
At present, when data in a communication device needs to be encrypted or decrypted, the device switches threads in order to call the encryption card. However, this may interrupt a thread that the communication device had already been running, so that a service executed by that thread, such as network forwarding, is interrupted and the performance of that service becomes unstable.
Disclosure of Invention
In view of this, the present application provides a call control method for an encryption card and a communication device, so as to solve the technical problem in the prior art that network forwarding performance becomes unstable when the performance of the encryption card is maximized.
The application provides a call control method for an encryption card, applied to a communication device, wherein the communication device includes a plurality of processor cores, each processor core is provided with a corresponding thread pool, each thread pool includes a plurality of encryption and decryption threads, and each encryption and decryption thread corresponds to a processing unit in the encryption card. The method includes the following steps:
after the communication device receives first data, determining whether the first data meets the condition of encryption and decryption processing;
and, when the first data meets the condition of encryption and decryption processing, starting an encryption and decryption thread in the thread pool to call the processing unit corresponding to the encryption and decryption thread to perform first processing on the first data to obtain second data.
Preferably, before starting the encryption and decryption threads in the thread pool, the method further includes:
judging whether any one of the following calling conditions is met:
the target thread is in an idle state; the target thread is a main thread for data transmission in the communication equipment;
or,
the target duration reaches a duration threshold, the target duration being the time elapsed from the last time the encryption and decryption thread was started to the current time;
or,
the accumulated quantity of the first data reaches a preset quantity threshold;
and, when any one of the call conditions is satisfied, starting the encryption and decryption threads in the thread pool.
In the above method, preferably, each thread pool is provided with a corresponding forwarding queue, and the forwarding queue is used to store a data identifier corresponding to the first data;
and each encryption and decryption thread is provided with a corresponding thread queue, and the data identifiers in the forwarding queues are distributed to each thread queue under the condition that the accumulated quantity of the data identifiers in the forwarding queues reaches a preset accumulated threshold value, so that the encryption and decryption threads call corresponding processing units to perform first processing on first data corresponding to the data identifiers in the thread queues.
In the method, preferably, the data identifier corresponding to the first data is stored in the forwarding queue according to a sequence identifier corresponding to the first data, where the sequence identifier characterizes the data flow to which the first data belongs, so that data identifiers corresponding to first data belonging to the same data flow are stored in the same forwarding queue.
In the above method, preferably, the sequence identifier further characterizes whether the first data satisfies a condition of encryption and decryption processing;
wherein the determining whether the first data meets the condition of encryption and decryption processing includes:
determining whether the sequence identifier corresponding to the first data matches a preset data stream identifier, the data stream identifier being the identifier of a data stream that needs encryption and decryption;
if the sequence identifier corresponding to the first data matches the data stream identifier, the first data meets the condition of encryption and decryption processing; if the sequence identifier corresponding to the first data does not match the data stream identifier, the first data does not meet the condition of encryption and decryption processing.
In the foregoing method, preferably, the storage space in the forwarding queue is equal to the sum of the storage spaces of the thread queues corresponding to the forwarding queue.
In the above method, preferably, the encryption and decryption threads in the thread pool have thread numbers, and the thread numbers represent thread orders between the encryption and decryption threads;
before starting the encryption and decryption threads in the thread pool, the method further comprises the following steps:
and sequentially distributing the data identifiers in the forwarding queue to the corresponding thread queues corresponding to each encryption and decryption thread in a polling mode according to the thread numbers, so that the encryption and decryption threads call the corresponding processing units to perform first processing on first data corresponding to the data identifiers in the thread queues.
In the above method, preferably, the invoking of the encryption and decryption thread by the corresponding processing unit to perform first processing on first data corresponding to the data identifier in the thread queue includes:
the encryption and decryption thread reads first data corresponding to the data identification in a storage area according to the data identification in the thread queue;
the encryption and decryption thread sends the first data to a processing unit corresponding to the encryption and decryption thread, so that the processing unit performs encryption processing or decryption processing on the received first data to obtain second data;
and the encryption and decryption thread writes the second data sent by the processing unit into the storage area according to the corresponding data identifier of the second data in the thread queue.
The above method, preferably, further comprises:
outputting the second data after the processing completion message is obtained;
and the processing completion message is generated after all the encryption and decryption threads in the thread pool call corresponding processing units to process first data corresponding to all the data identifications in the thread queue.
The method preferably further comprises, before outputting the second data:
and performing data format conversion on the second data according to a preset target format to obtain the second data in the target format.
In the above method, preferably, the encryption and decryption thread is in a sleep state before being started; and the encryption and decryption thread enters a sleep state after the processing unit is called.
In the above method, preferably, the thread pool is created based on the detected number of processor cores when the communication device is started.
The present application also provides a communication device, comprising:
one or more processors;
a memory having a computer program stored thereon;
the computer program, when executed by the one or more processors, causes the one or more processors to implement a method of call control for an encryption card as described in any of the above.
According to the above scheme, in the encryption card call control method and the communication device, corresponding encryption and decryption threads are created for the processor cores in the communication device and correspond to the processing units in the encryption card. On this basis, when the communication device receives first data and the first data meets the condition of encryption and decryption processing, the corresponding encryption and decryption thread is started to call the corresponding processing unit to process the first data, thereby obtaining second data. Because dedicated threads are created for the processor cores, encryption and decryption can be performed without switching away the threads that handle normal network forwarding in the communication device in order to call the processing units, so that the performance of the multiple processing units of the encryption card can be maximized without interfering with data packet forwarding.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a flowchart of a method for call control of an encryption card according to an embodiment of the present application;
FIGS. 2-4 are exemplary diagrams of embodiments of the present application, respectively;
fig. 5 is another flowchart of a method for controlling invocation of an encryption card according to an embodiment of the present application;
FIGS. 6-9 are diagrams of another example of an embodiment of the present application, respectively;
fig. 10 is a partial flowchart of a method for call control of an encryption card according to an embodiment of the present application;
FIG. 11 is another illustration of an embodiment of the present application;
fig. 12 is another flowchart of a method for controlling invocation of an encryption card according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a communication device according to an embodiment of the present application;
fig. 14 and 15 are other exemplary diagrams of the embodiment of the present application.
Detailed Description
The inventor of the present application, when studying communication devices that use encryption cards, found the following. An encryption card installed on a communication device is generally composed of a plurality of hardware channels, where a hardware channel refers to a cryptographic chip in the encryption card used for encryption and decryption processing. Since the performance of each individual hardware channel is not particularly high, as many hardware channels of the encryption card as possible must be used to maximize performance.
At present, to maximize the performance of the encryption card, it must be called in a multithreaded manner. However, the operating system in the communication device has to forward network traffic in addition to calling the encryption card. If many threads are started to call the encryption card, the system incurs many thread switches; this switching is not controllable, so the network forwarding performance becomes unstable, fluctuating between high and low.
In view of the above drawbacks, the inventor of the present application proposes an implementation technique that improves the throughput of the encryption card by using a software thread pool to simulate coroutines. It takes advantage of the fact that coroutines are not preemptively scheduled by the operating system kernel of the communication device: the creating process manages the cooperative "threads" itself in user mode, they do not participate in the operating system's CPU time scheduling, and CPU time is not allocated to them evenly. In a specific implementation, a pre-allocated thread pool is used; the pre-allocated threads run in a non-preemptive mode and do not cause thread switches in the system, so normal network forwarding is not disturbed whether or not encryption and decryption tasks are being processed. When encryption and decryption are processed, the CPU is fully occupied, data packets are always processed in batches, and as many hardware encryption channels as possible are used, so that the performance of the encryption card is utilized to the maximum and optimal performance is achieved.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, an implementation flowchart of a call control method for an encryption card provided in an embodiment of the present application is shown. The method is applied to a communication device configured with an encryption card that has a plurality of processing units, as shown in fig. 2. A processing unit here refers to a cryptographic chip in the encryption card that can perform processing such as encryption and decryption of data; the data processing channel it implements may be referred to as a hardware encryption channel. The communication device includes a plurality of processor cores, each processor core corresponds to a thread pool, each thread pool includes a plurality of encryption and decryption threads, and each encryption and decryption thread corresponds to a processing unit in the encryption card. The technical solution in this embodiment is mainly used to maximize the performance of the multiple processing units of the encryption card without interfering with data packet forwarding.
Specifically, the method in this embodiment may include the following steps:
step 101: monitoring whether first data is received on the communication device, if the first data is received, executing step 102, and if the first data is not received, continuing to execute step 101.
The first data is a data packet to be processed that is received by the communication device. The first data corresponds to a data identifier and a sequence identifier. The data identifier here may be an address pointer of the first data to be processed, the address pointer being used to read the data to be processed, i.e. the first data. The sequence identifier characterizes the data flow to which the first data belongs. In a specific implementation, the sequence identifier may be represented by a hash value of a five-tuple, where the five-tuple may include the source IP address, the destination IP address, the source port, the destination port, and the protocol information corresponding to the first data. The sequence identifier may also be represented by hash values of other tuples.
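For illustration only, the following C sketch shows one possible way to derive such a sequence identifier from the five-tuple, so that all packets of the same data flow map to the same value. The struct layout and the use of an FNV-1a hash are assumptions made for this sketch and are not specified by this application.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical five-tuple layout; the field names are illustrative. */
struct five_tuple {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
    uint8_t  protocol;
};

/* FNV-1a over a byte range. */
static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* Sequence identifier of a packet: a hash of its five-tuple, so packets of
 * the same data flow always receive the same identifier. */
static uint32_t sequence_id(const struct five_tuple *t)
{
    uint32_t h = 2166136261u;
    h = fnv1a(h, &t->src_ip,   sizeof t->src_ip);
    h = fnv1a(h, &t->dst_ip,   sizeof t->dst_ip);
    h = fnv1a(h, &t->src_port, sizeof t->src_port);
    h = fnv1a(h, &t->dst_port, sizeof t->dst_port);
    h = fnv1a(h, &t->protocol, sizeof t->protocol);
    return h;
}
```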
It should be noted that a plurality of processor cores may be included in the communication device. For example, as shown in fig. 3, the communication device contains 4 processor cores, namely CPU0, CPU1, CPU2 and CPU3. Each processor core is correspondingly provided with a thread pool, each thread pool includes a plurality of encryption and decryption threads, and one encryption and decryption thread corresponds to one processing unit in the encryption card. For example, as shown in fig. 3, the processor cores CPU0, CPU1, CPU2 and CPU3 correspond to thread pool 0, thread pool 1, thread pool 2 and thread pool 3, respectively, and each encryption and decryption thread in each thread pool corresponds to one cryptographic chip, i.e. hardware channel, in the encryption card. For example, as shown in fig. 4, thread 0, thread 1, thread 2 and thread 3 in thread pool 0 correspond in sequence to hardware channels 0-3 in the encryption card, thread 0, thread 1, thread 2 and thread 3 in thread pool 1 correspond in sequence to hardware channels 4-7, thread 0, thread 1, thread 2 and thread 3 in thread pool 2 correspond in sequence to hardware channels 8-11, and thread 0, thread 1, thread 2 and thread 3 in thread pool 3 correspond in sequence to hardware channels 12-15.
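As a minimal sketch of the fixed mapping implied by the example above (4 processor cores, 4 encryption and decryption threads per pool, 16 hardware channels), the channel index driven by a given thread can be computed as follows; the names are illustrative.

```c
#define THREADS_PER_POOL 4   /* 4 threads per pool in the example of Fig. 4 */

/* Thread `thread` of the pool bound to processor core `core` drives one fixed
 * hardware channel of the encryption card. */
static inline int hw_channel_index(int core, int thread)
{
    return core * THREADS_PER_POOL + thread;  /* e.g. core 3, thread 0 -> channel 12 */
}
```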
Step 102: and judging whether the first data meets the conditions of the encryption and decryption processing, if so, executing the step 103, and if not, returning to continue executing the step 101.
The condition of the encryption and decryption processing may be that the sequence identifier corresponding to the first data matches a preset data stream identifier, where the data stream identifier is the identifier of a data stream that needs to be encrypted and decrypted. The data stream identifier is preset in the communication device according to requirements. Based on this, in this embodiment, after the first data is received, it may be determined whether the sequence identifier corresponding to the first data matches a preset data stream identifier. If the sequence identifier corresponding to the first data matches the data stream identifier, it may be determined that the first data is data that needs to be encrypted or decrypted, that is, the first data meets the condition of encryption and decryption processing; if the sequence identifier corresponding to the first data does not match the data stream identifier, it may be determined that the first data is not data that needs to be encrypted or decrypted, that is, the first data does not meet the condition of encryption and decryption processing. In other words, in this embodiment, whether the first data needs to be encrypted and decrypted may be determined according to the source information and the destination information of the first data.
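A minimal sketch of this check, assuming the preset data stream identifiers are kept in a plain array; the table and its contents are illustrative only.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical preset identifiers of the data streams that need encryption
 * and decryption; the values are placeholders. */
static const uint32_t stream_ids[] = { 0x1a2b3c4dU, 0x9e8d7c6bU };

/* First data meets the condition of encryption and decryption processing only
 * if its sequence identifier matches one of the preset stream identifiers. */
static bool needs_crypto(uint32_t seq_id)
{
    for (size_t i = 0; i < sizeof stream_ids / sizeof stream_ids[0]; i++)
        if (stream_ids[i] == seq_id)
            return true;
    return false;
}
```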
Step 103: and starting the encryption and decryption thread in the thread pool to call the processing unit corresponding to the encryption and decryption thread to perform first processing on the first data so as to obtain second data.
In a specific implementation, in this embodiment, the data identifier corresponding to the first data may be sent to the started encryption and decryption thread through an interface with that encryption and decryption thread. The started encryption and decryption thread may then call the corresponding processing unit to perform the first processing on the first data corresponding to the received data identifier, so as to obtain the second data.
The first processing may be encryption processing, in which case the second data is encrypted data obtained by encrypting the first data; or the first processing may be decryption processing, in which case the second data is decrypted data obtained by decrypting the first data.
It should be noted that the encryption and decryption threads in the thread pool are in a sleep state by default. Only when a processing unit needs to be called to perform the first processing is an encryption and decryption thread in the thread pool started, that is, woken up, so that the data identifier of the first data to be processed is sent to the woken encryption and decryption thread, and the woken encryption and decryption thread then calls the corresponding processing unit to perform the corresponding first processing. After the encryption and decryption thread finishes the call and the processing, it enters the sleep state again until it is started the next time.
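The application does not prescribe how the sleep and wake-up are implemented; the following pthread-based C sketch shows one possible mechanism, with a condition variable standing in for the wake-up signal. All names are illustrative.

```c
#include <pthread.h>
#include <stdbool.h>

/* Per-thread state: the thread sleeps until the forwarding path hands it work,
 * processes its thread queue, then goes back to sleep.  The condvar/flag
 * mechanism is an assumption; the text only states that threads sleep between
 * invocations. */
struct crypto_thread {
    pthread_mutex_t lock;
    pthread_cond_t  wake;
    bool            has_work;
};

static void *crypto_thread_main(void *arg)
{
    struct crypto_thread *self = arg;
    for (;;) {
        pthread_mutex_lock(&self->lock);
        while (!self->has_work)                 /* sleep until woken */
            pthread_cond_wait(&self->wake, &self->lock);
        self->has_work = false;
        pthread_mutex_unlock(&self->lock);

        /* process_thread_queue(self);  -- call the hardware channel here */
    }
    return NULL;
}

/* Called by the forwarding path after filling this thread's queue. */
static void wake_crypto_thread(struct crypto_thread *t)
{
    pthread_mutex_lock(&t->lock);
    t->has_work = true;
    pthread_cond_signal(&t->wake);
    pthread_mutex_unlock(&t->lock);
}
```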
According to the foregoing scheme, in the encryption card call control method provided in this embodiment of the present application, corresponding encryption and decryption threads are created for the processor cores in the communication device and correspond to the processing units in the encryption card. On this basis, when the communication device receives first data and the first data meets the condition of encryption and decryption processing, the corresponding encryption and decryption thread is started to call the corresponding processing unit to process the first data, so as to obtain second data. It can be seen that, because dedicated threads are created for the processor cores, encryption and decryption can be performed without switching away the threads that handle normal network forwarding in the communication device in order to call the processing units, so that the performance of the multiple processing units of the encryption card can be maximized without interfering with data packet forwarding.
In one implementation, before starting the encryption and decryption threads in the thread pool in step 103, the method in this embodiment may further include the following steps, as shown in fig. 5:
step 104: and judging whether the calling condition is met, if so, executing the step 103, and if not, continuing to monitor whether the calling condition is met.
The call condition is satisfied when any one of the following conditions is met:
the first call condition may be: the target thread is in an idle state, where the target thread may be a main thread for data transmission in the communication device. When the target thread is in the idle state, the amount of idle resources in the communication device is large, and therefore, when the first call condition is met, the encryption and decryption threads in the thread pool can be started to call the corresponding processing units to process the first data under the support of the large amount of idle resources, so that the processing efficiency is improved.
The second call condition may be: the target duration reaches the duration threshold, where the target duration is the duration from the last time the encryption/decryption thread was started to the current time. The time threshold may be preset according to the requirement, such as 100 milliseconds. The time threshold may be understood as a preset time window, that is, after the previous encryption and decryption thread is started and the processing unit is called to complete the first processing for one time, in this embodiment, timing is started, and then the received first data is accumulated, and when the time window is reached, that is, the target time length of timing reaches the time threshold, the encryption and decryption thread in the thread pool is started again to call the corresponding processing unit to perform batch processing on the accumulated first data, so as to avoid a situation of low efficiency caused by frequently starting the encryption and decryption thread.
The third call condition may be: and the accumulated amount of the first data reaches a preset amount threshold, and the accumulated amount is the total amount of the first data received from the time when the encryption and decryption thread is started last to the current time. That is to say, after the previous encryption and decryption thread is started and the processing unit is called to complete the first processing for one time, in this embodiment, the received first data is accumulated, and when the accumulated number reaches the number threshold, the encryption and decryption thread in the restart thread pool calls the corresponding processing unit to perform batch processing on the accumulated first data, so as to avoid the situation of low efficiency caused by frequently starting the encryption and decryption thread.
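The three call conditions above can be combined as in the following sketch; the 100 ms window comes from the example in the text, while the batch-size value and the field names are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define DURATION_THRESHOLD_MS 100   /* example value from the text            */
#define COUNT_THRESHOLD       16    /* assumed batch size                     */

struct batch_state {
    bool     main_thread_idle;      /* is the target (main forwarding) thread idle? */
    uint64_t last_start_ms;         /* time the crypto threads were last started    */
    uint32_t accumulated_packets;   /* first data accumulated since then            */
};

/* Start the encryption and decryption threads as soon as any one of the three
 * call conditions holds. */
static bool should_start_crypto(const struct batch_state *s, uint64_t now_ms)
{
    if (s->main_thread_idle)
        return true;
    if (now_ms - s->last_start_ms >= DURATION_THRESHOLD_MS)
        return true;
    if (s->accumulated_packets >= COUNT_THRESHOLD)
        return true;
    return false;
}
```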
In a specific implementation, in this embodiment, a corresponding forwarding queue is respectively set for each thread pool, where the forwarding queue is used to store a data identifier corresponding to the first data, as shown in fig. 6, a thread pool 0 has a forwarding queue 0, a thread pool 1 has a forwarding queue 1, a thread pool 2 has a forwarding queue 2, and a thread pool 3 has a forwarding queue 3.
Furthermore, each encryption and decryption thread is provided with a corresponding thread queue, and under the condition that the accumulated number of the data identifiers in the forwarding queue reaches a preset accumulated threshold, the data identifiers in the forwarding queue are distributed to each thread queue, so that the encryption and decryption thread calls a corresponding processing unit to perform first processing on first data corresponding to the data identifiers in the thread queue.
The accumulation threshold may be an upper limit value of the capacity of the forwarding queue, or another value smaller than the upper limit value of the capacity of the forwarding queue.
Taking thread pool 0 as an example, as shown in fig. 7, encryption/decryption thread 0 in thread pool 0 has thread queue 0, encryption/decryption thread 1 has thread queue 1, encryption/decryption thread 2 has thread queue 2, and encryption/decryption thread 3 has thread queue 3. Based on this, when the accumulated number of the data identifiers stored in the forwarding queue 0 reaches the upper limit of the capacity of the forwarding queue 0, all the data identifiers in the forwarding queue 0 are distributed to the thread queue 0, the thread queue 1, the thread queue 2 and the thread queue 3, so that the encryption and decryption thread 0 is started and calls the hardware channel 0 to process the first data corresponding to the data identifiers in the thread queue 0, the encryption and decryption thread 1 calls the hardware channel 1 to process the first data corresponding to the data identifiers in the thread queue 1, the encryption and decryption thread 2 calls the hardware channel 2 to process the first data corresponding to the data identifiers in the thread queue 2, and the encryption and decryption thread 3 calls the hardware channel 3 to process the first data corresponding to the data identifiers in the thread queue 3.
It should be noted that, the forwarding queue corresponding to each thread pool stores the data identifier corresponding to the first data according to the sequence identifier corresponding to the first data, and the sequence identifier represents the data flow to which the first data belongs, so that the data identifiers corresponding to the first data belonging to the same data flow are stored in the same forwarding queue. As shown in fig. 8, data identifiers in the forwarding queue 0, the forwarding queue 1, the forwarding queue 2, and the forwarding queue 3 are stored according to data streams corresponding to the first data to which the data identifiers belong, and data streams corresponding to the first data to which the data identifiers stored in the forwarding queue 0, the forwarding queue 1, the forwarding queue 2, and the forwarding queue 3 belong may be different, and are respectively a data stream X, a data stream Y, a data stream Z, and a data stream R, but data streams corresponding to the first data to which the data identifiers stored in the same forwarding queue belong are the same.
In a specific implementation, the storage space in the forwarding queue is equal to the sum of the storage spaces of the thread queues corresponding to the forwarding queue. Therefore, when the accumulated number of the data identifications in the forwarding queue reaches the upper limit of the capacity of the forwarding queue, all the data identifications stored in the forwarding queue can be distributed to corresponding thread queues. For example, the memory space of forward queue 0 is consistent with the sum of the memory spaces of thread queue 0, thread queue 1, thread queue 2, and thread queue 3. Based on this, when the data id accumulation in the forwarding queue 0 reaches the upper limit 16 of the capacity of the forwarding queue 0, 16 data ids in the forwarding queue 0 are distributed to the thread queue 0, the thread queue 1, the thread queue 2, and the thread queue 3, and 4 data ids are distributed to each thread queue.
In one implementation, the encryption/decryption threads in the thread pool have thread numbers, such as encryption/decryption thread 0, encryption/decryption thread 1, encryption/decryption thread 2, and encryption/decryption thread 3, which characterize the thread order among the encryption/decryption threads and correspondingly characterize the queue order among the thread queues corresponding to the encryption/decryption threads, such as the thread order from encryption/decryption thread 0 to encryption/decryption thread 3, and further such as the queue order from thread queue 0 to thread queue 3.
Based on this, in this embodiment, before starting the encryption and decryption threads in the thread pool in step 103, when distributing the data identifier in the forwarding queue to the thread queue corresponding to each encryption and decryption thread in the thread pool, the following specific implementation may be performed:
and according to the thread number, sequentially distributing the data identifier in the forwarding queue to the thread queue corresponding to each encryption and decryption thread in a polling mode, so that the encryption and decryption threads call the corresponding processing unit to perform first processing on first data corresponding to the data identifier in the thread queue.
The polling mode is as follows: according to the thread sequence represented by the thread number, sending a first data identifier in the forwarding queue to a thread queue of a first encryption and decryption thread in a thread pool; then, sending the next data identifier in the forwarding queue to a thread queue of a second encryption and decryption thread in the thread pool; then, sending the next data identifier in the forwarding queue to a thread queue of a third encryption and decryption thread in the thread pool; until the next data identifier in the forwarding queue is sent to the thread queue of the last encryption and decryption thread in the thread pool; then, sending the next data identifier in the forwarding queue to a thread queue of a first encryption and decryption thread in a thread pool; and then, sending the next data identifier in the forwarding queue to the thread queue of the second encryption and decryption thread in the thread pool, and so on until the last data identifier in the forwarding queue is sent to the thread queue of the last encryption and decryption thread in the thread pool.
For example, as shown in fig. 9, data identifier 0 in forwarding queue 0 is sent to thread queue 0, data identifier 1 in forwarding queue 0 is sent to thread queue 1, data identifier 2 in forwarding queue 0 is sent to thread queue 2, and data identifier 3 in forwarding queue 0 is sent to thread queue 3; sending the data identifier 4 in the forwarding queue 0 to the thread queue 0, sending the data identifier 5 in the forwarding queue 0 to the thread queue 1, sending the data identifier 6 in the forwarding queue 0 to the thread queue 2, and sending the data identifier 7 in the forwarding queue 0 to the thread queue 3; by analogy, the data identifier 8 in the forwarding queue 0 is continuously sent to the thread queue 0, the data identifier 9 in the forwarding queue 0 is sent to the thread queue 1, the data identifier 10 in the forwarding queue 0 is sent to the thread queue 2, and the data identifier 11 in the forwarding queue 0 is sent to the thread queue 3; sending the data identifier 12 in the forwarding queue 0 to the thread queue 0, sending the data identifier 13 in the forwarding queue 0 to the thread queue 1, sending the data identifier 14 in the forwarding queue 0 to the thread queue 2, sending the data identifier 15 in the forwarding queue 0 to the thread queue 3 until the forwarding queue 0 is empty, and then calling the corresponding processing unit by each encryption and decryption thread to perform first processing on first data corresponding to the data identifier in each corresponding thread queue, thereby obtaining second data.
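A self-contained sketch of the forwarding queue, the thread queues, and the polling distribution described above, using the sizes from the example (a forwarding queue of 16 data identifiers feeding 4 thread queues of 4 each); the structures themselves are illustrative.

```c
#include <stdint.h>

#define THREADS_PER_POOL 4
#define THREAD_QUEUE_CAP 4
/* The forwarding queue capacity equals the sum of its thread queue capacities. */
#define FWD_QUEUE_CAP    (THREADS_PER_POOL * THREAD_QUEUE_CAP)   /* 16 in the example */

/* Queues hold data identifiers, i.e. address pointers to the packets. */
struct thread_queue     { void *ids[THREAD_QUEUE_CAP]; uint32_t count; };
struct forwarding_queue { void *ids[FWD_QUEUE_CAP];    uint32_t count; };

/* Polling (round-robin) distribution by thread number: identifier 0 goes to
 * thread queue 0, identifier 1 to thread queue 1, ..., identifier 4 back to
 * thread queue 0, and so on until the forwarding queue is drained.  Assumes
 * the thread queues start empty and the forwarding queue holds at most
 * FWD_QUEUE_CAP identifiers, so no thread queue can overflow. */
static void distribute(struct forwarding_queue *fq,
                       struct thread_queue tq[THREADS_PER_POOL])
{
    for (uint32_t i = 0; i < fq->count; i++) {
        struct thread_queue *t = &tq[i % THREADS_PER_POOL];
        t->ids[t->count++] = fq->ids[i];
    }
    fq->count = 0;   /* the forwarding queue is now empty */
}
```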
Based on the above implementation, when each encryption and decryption thread invokes a corresponding processing unit to perform first processing on first data corresponding to a data identifier in a thread queue, the implementation may specifically be implemented in the following manner, as shown in fig. 10:
step 1001: and the encryption and decryption thread reads first data corresponding to the data identification in the storage area according to the data identification in the thread queue.
Wherein the storage area may be a storage area in the communication device or may be a storage area in another device connected to the communication device. The data identifier may be an identifier capable of pointing to the first data, such as an address pointer. Based on the data identification, the encryption and decryption thread reads the first data corresponding to each data identification in the thread queue in the storage area.
Specifically, each encryption and decryption thread may read corresponding first data according to the sequence in which each data identifier in the thread queue is written; or each encryption and decryption thread can read the first data corresponding to each data identifier in the thread queue according to a random order; or, each encryption and decryption thread may read the first data corresponding to each data identifier in the thread queue at the same time.
For example, after the address pointers in the forwarding queue 0 are sent to the thread queue 0 to the thread queue 3 in the order from the encryption/decryption thread 0 to the encryption/decryption thread 3, the encryption/decryption thread 0 reads corresponding first data according to the address pointers in the thread queue 0, the encryption/decryption thread 1 reads corresponding first data according to the address pointers in the thread queue 1, the encryption/decryption thread 2 reads corresponding first data according to the address pointers in the thread queue 2, and the encryption/decryption thread 3 reads corresponding first data according to the address pointers in the thread queue 3.
Step 1002: the encryption and decryption thread sends the first data to a processing unit corresponding to the encryption and decryption thread, so that the processing unit performs encryption processing or decryption processing on the received first data to obtain second data.
The encryption and decryption thread may send the first data to the processing unit through a data interface or a data path between the encryption and decryption thread and the corresponding processing unit, and in addition, the encryption and decryption thread may also send other data to the corresponding processing unit together with the first data, where the other data is data required for processing, such as encryption and decryption, on the first data, such as a key.
Based on this, after the processing unit receives the first data, the processing unit performs processing such as encryption and decryption on the first data to obtain second data, and after obtaining the second data corresponding to the first data, the processing unit sends the second data to the encryption and decryption thread. For example, as shown in fig. 11, an encryption/decryption thread 0 sends read first data to a hardware channel 0, the hardware channel 0 performs encryption/decryption processing and sends obtained second data to the encryption/decryption thread 0, an encryption/decryption thread 1 sends the read first data to a hardware channel 1, the hardware channel 1 performs encryption/decryption processing and sends obtained second data to an encryption/decryption thread 1, an encryption/decryption thread 2 sends the read first data to a hardware channel 2, the hardware channel 2 performs encryption/decryption processing and sends obtained second data to an encryption/decryption thread 2, an encryption/decryption thread 3 sends the read first data to a hardware channel 3, and the hardware channel 3 performs encryption/decryption processing and sends obtained second data to an encryption/decryption thread 3.
Step 1003: and the encryption and decryption thread writes the second data sent by the processing unit into the storage area according to the corresponding data identification in the thread queue.
For example, the encryption/decryption thread 0 writes the second data sent by the hardware channel 0 into the storage location pointed by the corresponding address pointer in the thread queue 0 in the storage area, the encryption/decryption thread 1 writes the second data sent by the hardware channel 1 into the storage location pointed by the corresponding address pointer in the thread queue 1 in the storage area, the encryption/decryption thread 2 writes the second data sent by the hardware channel 2 into the storage location pointed by the corresponding address pointer in the thread queue 2 in the storage area, and the encryption/decryption thread 3 writes the second data sent by the hardware channel 3 into the storage location pointed by the corresponding address pointer in the thread queue 3 in the storage area.
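Steps 1001 to 1003 for a single queue entry might look as follows; the packet buffer layout, the in-place write-back, and the hardware channel call are assumptions, since the application does not define the encryption card's driver interface.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A data identifier is treated here as an address pointer to a packet buffer
 * in the storage area; the layout is illustrative. */
struct pkt_buf {
    uint8_t *data;
    size_t   len;
};

/* Placeholder declaration for the encryption card driver call on hardware
 * channel `chan`; the real interface is not given in this application. */
int hw_channel_process(int chan, const uint8_t *in, size_t in_len,
                       uint8_t *out, size_t *out_len);

/* Steps 1001-1003 for one thread queue entry. */
static int process_one(int chan, struct pkt_buf *pkt)
{
    uint8_t result[2048];             /* assumed maximum packet size */
    size_t  result_len = sizeof result;

    /* 1001: read the first data that the identifier points at (pkt->data).      */
    /* 1002: send it to this thread's hardware channel and wait for second data. */
    if (hw_channel_process(chan, pkt->data, pkt->len, result, &result_len) != 0)
        return -1;

    /* 1003: write the second data back to the storage area for this identifier. */
    memcpy(pkt->data, result, result_len);
    pkt->len = result_len;
    return 0;
}
```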
Based on the above implementation, after the second data is obtained in the present embodiment, the method in the present embodiment may further include the following steps, as shown in fig. 12:
step 105: after the processing completion message is obtained, the second data is output.
And the processing completion message is generated after all encryption and decryption threads in the thread pool call corresponding processing units to process first data corresponding to all data identifications in the thread queue. That is to say, after each encryption and decryption thread in the thread pool receives the second data sent by the corresponding processing unit and writes the second data into the storage area, a processing completion message is generated to represent that each encryption and decryption thread in the thread pool completes the first processing by calling the processing unit, and at this time, the second data in the storage area can be output.
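One possible way to generate the processing completion message is a per-pool counter that the last finishing thread uses to signal the forwarding process, as sketched below; this mechanism is an assumption, the application only requires that the message appear after every thread in the pool has written its second data back.

```c
#include <pthread.h>

/* One counter per thread pool: each encryption and decryption thread decrements
 * it after writing its second data back, and the last one to finish signals the
 * forwarding process. */
struct pool_done {
    pthread_mutex_t lock;
    pthread_cond_t  all_done;
    int             pending;     /* threads still working on this batch */
};

static void thread_finished(struct pool_done *d)
{
    pthread_mutex_lock(&d->lock);
    if (--d->pending == 0)
        pthread_cond_signal(&d->all_done);   /* the processing completion message */
    pthread_mutex_unlock(&d->lock);
}

static void wait_batch_done(struct pool_done *d)
{
    pthread_mutex_lock(&d->lock);
    while (d->pending > 0)
        pthread_cond_wait(&d->all_done, &d->lock);
    pthread_mutex_unlock(&d->lock);
}
```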
Further, in this embodiment, before outputting the second data, the second data may be read from the corresponding storage area according to the data identifier in the forwarding queue, and then the second data is subjected to data format conversion processing according to a preset target format to obtain the second data in the target format, and then the second data in the target format is output, for example, output to a corresponding processor core, and the processor core performs other processing on the second data, for example, output to the outside or internal calculation.
For example, after the second data is read, the second data is first subjected to data conversion according to an IPSec VPN (Internet Protocol Security Virtual Private Network) format, so that the second data in the IPSec VPN format is obtained, and then the second data is output. For example, in this embodiment, according to each address pointer in the forwarding queue 0, corresponding second data is read in the storage area of the communication device, then the second data is converted according to the IPsec VPN format, then the second data in the IPsec VPN format is output to the CPU0, and the CPU0 performs data calculation using the second data.
For another example, after the encryption/decryption thread 0-the encryption/decryption thread 3 respectively write the second data into the storage area according to the address pointers in the corresponding thread queues, the thread pool 0 generates a processing completion message to represent that the first data corresponding to the address pointers in the forwarding queue 0 has been encrypted or decrypted, further, after the second data is read according to the address pointers in the forwarding queue 0, the second data is converted according to the IPsec VPN format, and then the second data in the IPsec VPN format is output to the CPU 0; after the encryption and decryption threads 0-3 respectively write the second data into the storage area according to the address pointers in the corresponding thread queues, the thread pool 1 generates a processing completion message to represent that the first data corresponding to the address pointers in the forwarding queue 1 have been encrypted or decrypted, further, after the second data are read according to the address pointers in the forwarding queue 1, the second data are converted according to the IPsec VPN format, and then the second data in the IPsec VPN format are output to the CPU 1; after the encryption and decryption threads 0-3 respectively write the second data into the storage area according to the address pointers in the corresponding thread queues, the thread pool 2 generates a processing completion message to represent that the first data corresponding to the address pointers in the forwarding queue 2 have been encrypted or decrypted, further, after the second data are read according to the address pointers in the forwarding queue 2, the second data are converted according to the IPsec VPN format, and then the second data in the IPsec VPN format are output to the CPU 2; after the encryption and decryption threads 0 to 3 write the second data into the storage area according to the address pointers in the corresponding thread queues, the thread pool 3 generates a processing completion message to represent that the first data corresponding to the address pointers in the forwarding queue 3 have been encrypted or decrypted, and further, after the second data are read according to the address pointers in the forwarding queue 3, the second data are converted according to the IPsec VPN format, and then the second data in the IPsec VPN format are output to the CPU 3.
In a specific implementation, the thread pool is created based on the detected number of processor cores when the communication device is started. That is, the communication device creates a corresponding number of thread pools by detecting the number of processor cores involved at startup, for example, one processor core corresponds to one thread pool, and the total number of encryption and decryption threads in all the thread pools corresponds to the total number of processing units in the encryption card. For example, 4 thread pools are created according to the number 4 of processor cores, and then the corresponding 4 encryption/decryption threads are created in each thread pool according to the total number 16 of hardware channels in the encryption card, so that each hardware channel corresponds to one encryption/decryption thread.
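A sketch of this start-up step, assuming the core count is detected with sysconf() and that the channel count divides evenly among the cores as in the example (16 channels over 4 cores); crypto_thread_main stands for a worker entry point like the one in the sleep/wake sketch above and is only declared here, and passing the channel index as the thread argument is an illustrative choice.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* Worker entry point; receives the hardware channel index it is bound to. */
extern void *crypto_thread_main(void *arg);

/* Create one thread pool per detected processor core, dividing the encryption
 * card's hardware channels evenly among the pools. */
static pthread_t *create_pools(int total_hw_channels)
{
    long cores = sysconf(_SC_NPROCESSORS_ONLN);
    if (cores <= 0)
        cores = 1;
    int threads_per_pool = total_hw_channels / (int)cores;   /* 16 / 4 = 4 */

    pthread_t *tids = calloc((size_t)total_hw_channels, sizeof *tids);
    if (tids == NULL)
        return NULL;

    for (long core = 0; core < cores; core++) {
        for (int t = 0; t < threads_per_pool; t++) {
            int chan = (int)core * threads_per_pool + t;      /* fixed channel binding */
            pthread_create(&tids[chan], NULL, crypto_thread_main,
                           (void *)(intptr_t)chan);
        }
    }
    return tids;
}
```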
Referring to fig. 13, a schematic structural diagram of a communication device provided in an embodiment of the present application is shown. The communication device is configured with an encryption card that has a plurality of processing units, as shown in fig. 2; a processing unit refers to a cryptographic chip in the encryption card that can perform processing such as encryption and decryption of data, and the data processing channel it implements may be referred to as a hardware encryption channel. The communication device includes a plurality of processor cores, each processor core corresponds to a thread pool, each thread pool includes a plurality of encryption and decryption threads, and each encryption and decryption thread corresponds to a processing unit in the encryption card. The technical solution in this embodiment is mainly used to maximize the performance of the multiple processing units of the encryption card without interfering with data packet forwarding.
Specifically, the communication device in this embodiment may include the following structure:
one or more processors 1301;
a memory 1302 having a computer program stored thereon;
The computer program, when executed by the one or more processors 1301, causes the one or more processors to implement the following method:
after the communication equipment receives first data, judging whether the first data meets the conditions of encryption and decryption processing;
and starting an encryption and decryption thread in the thread pool to call a processing unit corresponding to the encryption and decryption thread to perform first processing on the first data to obtain second data under the condition that the first data meets the condition of encryption and decryption processing.
According to the above scheme, in the communication device, corresponding encryption and decryption threads are created for the processor cores and correspond to the processing units in the encryption card. On this basis, when the communication device receives first data and the first data meets the condition of encryption and decryption processing, the corresponding encryption and decryption thread is started to call the corresponding processing unit to process the first data, so as to obtain second data. It can be seen that, because dedicated threads are created for the processor cores, encryption and decryption can be performed without switching away the threads that handle normal network forwarding in the communication device in order to call the processing units, so that the performance of the multiple processing units of the encryption card can be maximized without interfering with data packet forwarding.
Taking as an example a computer equipped with an SM (Chinese national cryptographic standard) encryption card, in order to improve the throughput of the card's SM algorithms, each CPU (core) in the computer is bound to one forwarding process, as shown in fig. 3. Assuming the SM encryption card provides 16 hardware channels, each CPU is bound to one hardware channel, and if each hardware channel delivers 10 Mbps, the computer's total SM encryption and decryption throughput can reach 160 Mbps.
As shown in fig. 4, the forwarding process on each core pre-allocates a corresponding number of threads (i.e. the encryption and decryption threads described above) according to the number of hardware channels corresponding to that CPU, and each thread corresponds one-to-one to a hardware channel of the encryption card. The threads are set to a sleep state by default, and the forwarding process wakes up the threads in the thread pool to perform tasks only after an encryption or decryption event occurs (i.e. after the forwarding process receives a data packet to be processed).
As shown in fig. 14 and 15, when data needs to be encrypted or decrypted, the forwarding process uses a data queue (an order-preserving queue, i.e. the forwarding queue described above) to store the addresses of all the data. This queue is mainly used for order preservation: because different hardware channels are called for encryption, first-in first-out of the data cannot otherwise be guaranteed, so the forwarding process uses a dedicated queue to preserve the order of the data. When the data addresses in the queue reach a certain number, they are distributed to the thread queues (work queues) of the individual threads. After the distribution is completed, all threads in the thread pool are woken up to start working, and the forwarding process waits for the task completion results of all threads. The threads call their hardware channels in turn to perform encryption and decryption until all threads have finished their tasks, and the forwarding process is then notified. After the forwarding process receives the message, it takes out all the data in sequence from the order-preserving queue for further processing, as sketched below.
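Putting the pieces together, one batch in the forwarding process could be driven as in the following sketch; the stub functions stand in for the mechanisms described above and all names are illustrative.

```c
#include <stdint.h>

enum { FWD_CAP = 16 };                        /* order-preserving queue capacity */

struct order_queue { void *ids[FWD_CAP]; uint32_t count; };

/* Placeholders for the mechanisms sketched earlier in this description. */
static void wake_pool(void)          { /* wake every thread of this pool        */ }
static void wait_pool_done(void)     { /* block until the completion message    */ }
static void output_packet(void *p)   { /* format-convert and hand the packet on */ (void)p; }

/* One batch: the order-preserving queue fixes the output order even though
 * different hardware channels finish at different times. */
static void run_batch(struct order_queue *oq)
{
    /* distribute oq->ids round-robin to the thread queues (see earlier sketch) */
    wake_pool();
    wait_pool_done();
    for (uint32_t i = 0; i < oq->count; i++)   /* output in original arrival order */
        output_packet(oq->ids[i]);
    oq->count = 0;
}
```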
In summary, in the embodiments of the present application, a correspondence is pre-established between each CPU core and the hardware channels of the encryption card, a correspondence is established between the software coroutine stack (i.e. the thread pool) of the forwarding process running on each CPU and the hardware channels of the encryption card, a data list (used for order preservation) is established on the forwarding process, the data of the forwarding process is allocated to the individual threads, the CPU finally schedules the calling threads so that the encryption card is called to perform encryption and decryption, and the encrypted or decrypted packets are then forwarded. An implementation technique is thus provided that improves the throughput of the SM algorithms by using software-simulated coroutines: the multiple encryption channels of the SM encryption card are utilized to the maximum extent without interfering with normal data packet forwarding, performance is maximized, coroutine operation is simulated with a software thread pool, and the complexity of the upper-layer code is reduced.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in a random access memory (RAM), a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (13)

1. A calling control method for an encryption card, applied to communication equipment, wherein the communication equipment comprises a plurality of processor cores, each processor core corresponds to a thread pool, each thread pool comprises a plurality of encryption and decryption threads, and each encryption and decryption thread corresponds to a respective processing unit in the encryption card, the method comprising:
after the communication equipment receives first data, determining whether the first data meets a condition for encryption and decryption processing;
and in a case where the first data meets the condition for encryption and decryption processing, starting an encryption and decryption thread in the thread pool to call the processing unit corresponding to the encryption and decryption thread to perform first processing on the first data to obtain second data.
2. The method of claim 1, wherein, prior to starting an encryption and decryption thread in the thread pool, the method further comprises:
determining whether any one of the following call conditions is met:
a target thread is in an idle state, the target thread being a main thread used for data transmission in the communication equipment;
or
a target time length reaches a time length threshold, the target time length being the duration from the last time an encryption and decryption thread was started to the current time;
or
an accumulated quantity of the first data reaches a preset quantity threshold;
and in a case where any one of the call conditions is met, performing the following: starting the encryption and decryption threads in the thread pool.
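An illustrative check of the three call conditions above (hypothetical names and threshold values, shown only as a sketch of one possible reading of this claim):

#include <stdbool.h>
#include <time.h>

#define TIME_THRESHOLD_MS 2     /* illustrative time length threshold   */
#define COUNT_THRESHOLD   32    /* illustrative quantity threshold      */

bool should_start_pool(bool main_thread_idle,
                       struct timespec last_start,
                       int accumulated_count)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    long elapsed_ms = (now.tv_sec  - last_start.tv_sec)  * 1000
                    + (now.tv_nsec - last_start.tv_nsec) / 1000000;

    return main_thread_idle                      /* target thread is idle        */
        || elapsed_ms >= TIME_THRESHOLD_MS       /* target time length reached   */
        || accumulated_count >= COUNT_THRESHOLD; /* accumulated quantity reached */
}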
3. The method according to claim 1 or 2, wherein each thread pool is provided with a corresponding forwarding queue, and the forwarding queues are used for storing data identifiers corresponding to the first data;
and each encryption and decryption thread is provided with a corresponding thread queue; in a case where the accumulated quantity of data identifiers in the forwarding queue reaches a preset accumulation threshold, the data identifiers in the forwarding queue are distributed to the thread queues, so that the encryption and decryption threads call the corresponding processing units to perform the first processing on the first data corresponding to the data identifiers in the thread queues.
4. The method according to claim 3, wherein the data identifier corresponding to the first data is stored in the forwarding queue according to an order identifier corresponding to the first data, the order identifier characterizing the data stream to which the first data belongs, so that the data identifiers corresponding to first data belonging to the same data stream are stored in the same forwarding queue.
5. The method of claim 4, wherein the order identifier further characterizes whether the first data meets the condition for encryption and decryption processing;
wherein the determining whether the first data meets the condition for encryption and decryption processing comprises:
determining whether the order identifier corresponding to the first data matches a preset data stream identifier, the data stream identifier being the identifier of a data stream that needs encryption and decryption;
wherein, if the order identifier corresponding to the first data matches the data stream identifier, the first data meets the condition for encryption and decryption processing; and if the order identifier corresponding to the first data does not match the data stream identifier, the first data does not meet the condition for encryption and decryption processing.
6. The method of claim 3, wherein the storage space in the forwarding queue is equal to the sum of the storage spaces of the thread queues to which the forwarding queue corresponds.
7. The method of claim 3, wherein the encryption and decryption threads in the thread pool have thread numbers that characterize thread order among the encryption and decryption threads;
before starting the encryption and decryption threads in the thread pool, the method further comprises the following steps:
and sequentially distributing, in a polling manner according to the thread numbers, the data identifiers in the forwarding queue to the thread queue corresponding to each encryption and decryption thread, so that the encryption and decryption threads call the corresponding processing units to perform the first processing on the first data corresponding to the data identifiers in their thread queues.
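A sketch of this polling distribution (hypothetical names; identifiers are handed to the thread queues in thread-number order):

#define NTHREADS  4              /* encryption/decryption threads in the pool */
#define QUEUE_CAP 8              /* capacity of each thread queue             */

typedef struct { unsigned long id[QUEUE_CAP]; int n; } thread_queue_t;

/* Distribute the identifiers buffered in the forwarding queue round-robin,
 * i.e. to thread 0, 1, 2, 3, 0, 1, ... according to the thread numbers.     */
void distribute(const unsigned long *fwd_ids, int count,
                thread_queue_t tq[NTHREADS])
{
    for (int i = 0; i < count; i++) {
        thread_queue_t *q = &tq[i % NTHREADS];
        if (q->n < QUEUE_CAP)
            q->id[q->n++] = fwd_ids[i];
    }
}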
8. The method of claim 7, wherein the encryption and decryption thread calling the corresponding processing unit to perform the first processing on the first data corresponding to the data identifiers in the thread queue comprises:
the encryption and decryption thread reads, from a storage area and according to the data identifier in the thread queue, the first data corresponding to the data identifier;
the encryption and decryption thread sends the first data to the processing unit corresponding to the encryption and decryption thread, so that the processing unit performs encryption processing or decryption processing on the received first data to obtain second data;
and the encryption and decryption thread writes the second data sent back by the processing unit into the storage area according to the data identifier corresponding to the second data in the thread queue.
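A sketch of this per-thread processing; the storage area and the processing unit are replaced by toy stand-ins (storage_read, storage_write and unit_process are hypothetical names, not an API from the patent):

#include <stddef.h>

typedef struct { unsigned char data[2048]; size_t len; } buf_t;

static buf_t storage[64];                        /* toy stand-in for the storage area */

static buf_t *storage_read(unsigned long id)     /* locate first data by identifier   */
{
    return &storage[id % 64];
}

static void storage_write(unsigned long id, const buf_t *out)
{
    storage[id % 64] = *out;                     /* write second data back by id      */
}

static void unit_process(int channel, const buf_t *in, buf_t *out)
{
    (void)channel;                               /* stand-in for the processing unit  */
    *out = *in;
    for (size_t i = 0; i < out->len; i++)
        out->data[i] ^= 0x5a;
}

/* One encryption/decryption thread working through its thread queue. */
void process_queue(int channel, const unsigned long *ids, int n)
{
    for (int i = 0; i < n; i++) {
        buf_t out;
        const buf_t *in = storage_read(ids[i]);  /* read first data                   */
        unit_process(channel, in, &out);         /* encrypt or decrypt                */
        storage_write(ids[i], &out);             /* write second data                 */
    }
}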
9. The method of claim 7, further comprising:
outputting the second data after a processing completion message is obtained;
wherein the processing completion message is generated after all the encryption and decryption threads in the thread pool have called the corresponding processing units to process the first data corresponding to all the data identifiers in their thread queues.
10. The method of claim 9, wherein prior to outputting the second data, the method further comprises:
and performing data format conversion on the second data according to a preset target format to obtain the second data in the target format.
11. The method according to claim 1 or 2, wherein the encryption and decryption thread is in a sleep state before being started; and the encryption and decryption thread re-enters the sleep state after its call to the processing unit has been completed.
12. The method of claim 1 or 2, wherein the thread pool is created based on the number of processor cores detected at startup of the communication equipment.
13. A communication device, comprising:
one or more processors;
a memory having a computer program stored thereon;
the computer program, when executed by the one or more processors, causes the one or more processors to implement the encryption card calling control method of any one of claims 1 to 12.
CN202111062763.5A 2021-09-10 2021-09-10 Encryption card calling control method and communication equipment Pending CN113722103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111062763.5A CN113722103A (en) 2021-09-10 2021-09-10 Encryption card calling control method and communication equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111062763.5A CN113722103A (en) 2021-09-10 2021-09-10 Encryption card calling control method and communication equipment

Publications (1)

Publication Number Publication Date
CN113722103A true CN113722103A (en) 2021-11-30

Family

ID=78683257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111062763.5A Pending CN113722103A (en) 2021-09-10 2021-09-10 Encryption card calling control method and communication equipment

Country Status (1)

Country Link
CN (1) CN113722103A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003069555A (en) * 2001-08-29 2003-03-07 Mitsubishi Electric Corp Encryption device and encryption/decryption processing method
CN101431407A (en) * 2008-12-15 2009-05-13 西安电子科技大学 Cipher processor supporting thread-level encryption and decryption and its cipher operation method
US20120230489A1 (en) * 2011-03-11 2012-09-13 Samsung Electronics Co. Ltd. Apparatus and method for short range communication in mobile terminal
CN102724035A (en) * 2012-06-15 2012-10-10 中国电力科学研究院 Encryption and decryption method for encrypt card
CN102843235A (en) * 2012-09-06 2012-12-26 汉柏科技有限公司 Message encrypting/decrypting method
CN107395452A (en) * 2017-06-22 2017-11-24 重庆大学 A kind of method for the HTTPS application performances that WebServer is improved using software-hardware synergism technology
CN110866262A (en) * 2019-11-05 2020-03-06 郑州信大捷安信息技术股份有限公司 Asynchronous encryption and decryption system and method with cooperative work of software and hardware

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114489867A (en) * 2022-04-19 2022-05-13 浙江大华技术股份有限公司 Algorithm module scheduling method, algorithm module scheduling device and readable storage medium
WO2023201947A1 (en) * 2022-04-19 2023-10-26 Zhejiang Dahua Technology Co., Ltd. Methods, systems, and storage media for task dispatch
CN114675863A (en) * 2022-05-27 2022-06-28 浙江大华技术股份有限公司 Algorithm configuration file updating method and related method, device, equipment and medium
CN114675863B (en) * 2022-05-27 2022-10-04 浙江大华技术股份有限公司 Algorithm configuration file updating method and related method, device, equipment and medium
WO2024114433A1 (en) * 2022-11-30 2024-06-06 苏州元脑智能科技有限公司 Video stream encryption configuration information synchronization method and apparatus, device, and medium

Similar Documents

Publication Publication Date Title
CN113722103A (en) Encryption card calling control method and communication equipment
US20220030095A1 (en) Methods and apparatus for sharing and arbitration of host stack information with user space communication stacks
US10241830B2 (en) Data processing method and a computer using distribution service module
US9218203B2 (en) Packet scheduling in a multiprocessor system using inter-core switchover policy
US20210099391A1 (en) Methods and apparatus for low latency operation in user space networking
US7515596B2 (en) Full data link bypass
EP2215783B1 (en) Virtualised receive side scaling
EP2240852B1 (en) Scalable sockets
US20070168525A1 (en) Method for improved virtual adapter performance using multiple virtual interrupts
US20060195698A1 (en) Receive side scaling with cryptographically secure hashing
CN106571978B (en) Data packet capturing method and device
US8032658B2 (en) Computer architecture and process for implementing a virtual vertical perimeter framework for an overloaded CPU having multiple network interfaces
US7697434B1 (en) Method and apparatus for enforcing resource utilization of a container
US7363383B2 (en) Running a communication protocol state machine through a packet classifier
Kadloor et al. Scheduling with privacy constraints
US8149709B2 (en) Serialization queue framework for transmitting packets
CN112449012A (en) Data resource scheduling method, system, server and read storage medium
CN106941474B (en) Session initiation protocol server overload control method and server
CN113055292B (en) Method for improving forwarding performance of multi-core router and multi-core router
JP5262329B2 (en) Scheduling program, scheduling method, and scheduling apparatus
JP2005244417A (en) Band control unit, band control method, and band control program
CN115883257A (en) Password operation method and device based on security chip
CN114584346A (en) Log stream processing method, system, terminal device and storage medium
Habib et al. Authentication Based QoS Using Bandwidth Limitation
KR20060060530A (en) Method for processing network data with a priority scheduling in operating system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination