CN114390098A - Data transmission method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114390098A
CN114390098A (application number CN202011132819.5A)
Authority
CN
China
Prior art keywords
data
cache
user
thread
data transmission
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011132819.5A
Other languages
Chinese (zh)
Inventor
邱海港
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kingsoft Cloud Network Technology Co Ltd
Original Assignee
Beijing Kingsoft Cloud Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kingsoft Cloud Network Technology Co Ltd filed Critical Beijing Kingsoft Cloud Network Technology Co Ltd
Priority to CN202011132819.5A priority Critical patent/CN114390098A/en
Publication of CN114390098A publication Critical patent/CN114390098A/en
Pending legal-status Critical Current

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The application provides a data transmission method, a data transmission device, electronic equipment and a storage medium, and belongs to the technical field of communications. The method comprises the following steps: extracting first data from a kernel cache, wherein the first data is data to be sent to a first user thread; storing the first data in a user cache; and instructing the first user thread to extract the first data from the user cache. By adopting the technical solution provided by the application, the problem of low data transmission efficiency in a high-concurrency scenario can be solved.

Description

Data transmission method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data transmission method and apparatus, an electronic device, and a storage medium.
Background
When data transmission is performed, threads may be created in both the data sender and the data receiver, and data transmission is implemented through these threads. Specifically, when sending data, a user thread of the data sender may store the data to be sent in the kernel cache of the data sender, and the data sender may then send the data to the data receiver. When receiving data, the data receiver may store the received data in its kernel cache, and a user thread waiting to acquire the data then reads the data from the kernel cache.
In a high-concurrency scenario, the data sender and the data receiver may transmit a large amount of data within a short time. With the above data transmission manner, the large amount of data may occupy the kernel cache within a short time, so that the data receiver cannot receive data or the data sender cannot send data, which reduces data transmission efficiency.
Disclosure of Invention
An object of the embodiments of the present application is to provide a data transmission method, apparatus, electronic device, and storage medium, so as to solve the problem of low data transmission efficiency in a high-concurrency scenario. The specific technical solutions are as follows:
In a first aspect, a data transmission method is provided, where the method includes:
extracting first data from a kernel cache, wherein the first data is data to be sent to a first user thread;
storing the first data in a user cache;
instructing the first user thread to extract the first data from the user cache.
Optionally, storing the first data in a user cache includes:
obtaining a user cache corresponding to the first user thread from one or more user caches;
and storing the first data into a user cache corresponding to the first user thread.
Optionally, storing the first data in a user cache includes:
correspondingly storing a thread identifier and the first data in the user cache, wherein the thread identifier is an identifier of the first user thread;
the instructing the first user thread to extract the first data from the user cache comprises:
and instructing the first user thread to extract the first data from the user cache according to the thread identifier.
Optionally, before storing the first data in the user cache, the method further includes:
sending a user cache request to a device running the first user thread, wherein the user cache request is used for requesting to allocate a cache space;
receiving response information of the device, wherein the response information indicates a storage position of a cache space allocated to the first user thread;
and determining the cache space indicated by the storage position as the user cache.
In a second aspect, a data transmission method is provided, where the method includes:
extracting data transmission information from a user cache, wherein the data transmission information is used for representing data which is indicated to be sent by a second user thread, and the data transmission information is stored in the user cache by the second user thread;
acquiring second data identified by the data transmission information from a data storage space;
and storing the second data in a kernel cache, wherein the kernel cache is used for sending the second data.
Optionally, extracting data transmission information from the user cache includes:
detecting the storage amount occupied by the data transmission information in the user cache;
and under the condition that the storage amount reaches a preset threshold value, extracting the data transmission information from the user cache.
Optionally, storing the second data in the kernel cache includes:
under the condition that the data transmission information indicates a data receiver of the second data, searching a sub-cache region corresponding to the data receiver of the second data from the data receiver and the sub-cache region with the corresponding relation to obtain a target sub-cache region, wherein the kernel cache comprises a plurality of sub-cache regions;
and storing the second data in the target sub-cache region, wherein the kernel cache is used for sending the data stored in the target sub-cache region to the data receiver.
Optionally, storing the second data in the kernel cache includes:
and correspondingly storing a data receiver identification and the second data in a kernel cache, wherein the data receiver identification is an identification of a data receiver for receiving the second data, the data transmission information comprises the data receiver identification, and the kernel cache is used for sending the second data according to the data receiver identification.
In a third aspect, a data transmission apparatus is provided, the apparatus including:
the device comprises an extraction module, a first processing module and a second processing module, wherein the extraction module is used for extracting first data from a kernel cache, and the first data is data to be sent to a first user thread;
the storage module is used for storing the first data into a user cache;
an indication module, configured to indicate the first user thread to extract the first data from the user cache.
Optionally, the storage module includes:
the obtaining submodule is used for obtaining a user cache corresponding to the first user thread from one or more user caches;
and the first storage submodule is used for storing the first data into a user cache corresponding to the first user thread.
Optionally, the storage module is configured to correspondingly store a thread identifier and the first data in the user cache, where the thread identifier is an identifier of the first user thread;
the indication module is configured to instruct the first user thread to extract the first data from the user cache according to the thread identifier.
Optionally, the apparatus further comprises:
a sending module, configured to send a user cache request to a device running the first user thread, where the user cache request is used to request allocation of a cache space;
a receiving module, configured to receive response information of the device, where the response information indicates a storage location of a cache space allocated to the first user thread;
a determining module, configured to determine a cache space indicated by the storage location as the user cache.
In a fourth aspect, there is provided a data transmission apparatus, the apparatus comprising:
the extracting module is used for extracting data transmission information from a user cache, wherein the data transmission information is used for representing data which is indicated to be sent by a second user thread, and the data transmission information is stored in the user cache by the second user thread;
the acquisition module is used for acquiring second data identified by the data transmission information from a data storage space;
and the storage module is used for storing the second data in a kernel cache, wherein the kernel cache is used for sending the second data.
Optionally, the extracting module includes:
the detection submodule is used for detecting the storage amount occupied by the data transmission information in the user cache;
and the extraction submodule is used for extracting the data transmission information from the user cache under the condition that the storage amount reaches a preset threshold value.
Optionally, the storage module includes:
the searching submodule is used for searching, under the condition that the data transmission information indicates the data receiver of the second data, a sub-cache region corresponding to the data receiver of the second data from the data receivers and sub-cache regions having a corresponding relation, to obtain a target sub-cache region, wherein the kernel cache comprises a plurality of sub-cache regions;
and the storage submodule is used for storing the second data in the target sub-cache region, wherein the kernel cache is used for sending the data stored in the target sub-cache region to the data receiver.
Optionally, the storage module is configured to correspondingly store a data receiver identifier and the second data in a kernel cache, where the data receiver identifier is an identifier of a data receiver that receives the second data, the data transmission information includes the data receiver identifier, and the kernel cache is configured to send the second data according to the data receiver identifier.
In a fifth aspect, an electronic device is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor adapted to perform the method steps of any of the first aspect or any of the second aspect when executing a program stored in the memory.
In a sixth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when being executed by a processor, carries out the method steps of any of the first aspects, or any of the second aspects.
In a seventh aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the data transmission methods described above.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a data transmission method and device, electronic equipment and a storage medium. In the application, first data can be extracted from a kernel cache, wherein the first data is data to be sent to a first user thread; storing the first data in a user cache; the first user thread is instructed to fetch the first data from the user cache.
When data is transmitted, the first data is extracted from the kernel cache and then stored in the user cache, so that the storage space of the kernel cache can be released in real time and the storage pressure of the kernel cache is relieved. This avoids the problem that data cannot be received or sent when a large amount of data is processed within a short time, and improves data transmission efficiency.
Of course, not all advantages described above need to be achieved at the same time in the practice of any one product or method of the present application.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below; it is apparent that those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a data transmission method according to an embodiment of the present application;
fig. 2 is a flowchart of another data transmission method according to an embodiment of the present application;
fig. 3 is a flowchart of another data transmission method according to an embodiment of the present application;
fig. 4 is a diagram illustrating an example of a data transmission method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a data transmission device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another data transmission apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the present application provides a data transmission method, which can be applied to a data transmission party. The data transmission party may include a data sender and a data receiver.
Taking data transmission between a client installed with an application program and the server of that application program as an example, the data transmission party may include the client and the server. When the client sends data to the server, the data sender is the client and the data receiver is the server. When the server sends data to the client, the data sender is the server and the data receiver is the client. It will be appreciated that a device acting as the data sender in one case may act as the data receiver in another case.
In the embodiment of the present application, a data transmission party can create a plurality of threads for data transmission. The threads may include user threads and a target thread. A user thread is used to process the data required for running the application program: it may generate data to be sent to the data receiver, and it may process the data sent by the data sender according to the configured service logic. The target thread is used to extract data from the kernel cache or store data in the kernel cache. The number of target threads is smaller than the number of user threads; in the embodiment of the present application, the number of target threads may be 1.
The kernel cache is used to send data to the data receiver when data to be sent to the data receiver is stored in it, and to store data when data sent by the data sender is received.
In the embodiment of the present application, taking the application of the method to a data receiving party as an example, a processing procedure of the data transmission method is described, as shown in fig. 1, including:
step 101, extracting first data from a kernel cache.
The first data is data to be sent to the first user thread.
In an implementation, when first data sent by the data sender is received, the data receiver may store the first data in the kernel cache. A target thread in the data receiver may then extract the first data from the kernel cache.
Step 102, storing the first data in a user cache.
In an implementation, the target thread may determine a user cache in the data receiver, and then store the first data in the determined user cache.
Optionally, the data receiving side may be provided with 1 or more user caches, and the processing procedure of determining the user caches in the data receiving side by the target thread is also different according to different setting conditions of the user caches in the data receiving side. The specific processing procedure will be described in detail later.
The target thread may simply store the first data in the determined user cache; alternatively, the target thread may also obtain the thread identifier of the first user thread and store the first data in the determined user cache based on the thread identifier. The specific processing procedure will be described in detail later.
Step 103, instructing the first user thread to extract the first data from the user cache.
In implementation, the first data may carry a thread identifier, and the target thread may obtain the thread identifier carried by the first data, and use a thread corresponding to the thread identifier as the first user thread. The target thread may then send indication information to the first user thread, after which the first user thread may extract the first data from the user cache in accordance with the indication information.
The first user thread may extract the first data from the user cache according to the indication information in various manners. In a first possible implementation manner, where a plurality of user caches are provided in the data receiver, the indication information may indicate the user cache in which the first data is stored, and the first user thread may therefore extract the first data from the user cache indicated by the indication information.
In a second possible implementation manner, the indication information may indicate a storage location of the first data in the user cache, and thus, the first user thread may extract the first data from the user cache according to the storage location indicated by the indication information.
In a third possible implementation manner, the indication information may indicate a thread identifier of the first user thread, and the first user thread may extract the first data from the user cache according to the thread identifier, where the detailed processing procedure will be described in detail later.
In the embodiment of the present application, the first data is extracted from the kernel cache after it is received and is then stored in the user cache, so that the storage space of the kernel cache can be released in real time and the storage pressure of the kernel cache is relieved. This avoids the problem that data cannot be received or sent when a large amount of data is processed within a short time, and improves data transmission efficiency.
Under a high concurrency scene, a large number of user threads are established between a data sender and a data receiver, and the user threads extract received data from a kernel cache or store data to be sent so as to realize the transmission of a large amount of data in a short time. Therefore, the kernel cache needs to perform data interaction with a large number of user threads in a short time, and access threads are switched frequently, so that the processing pressure of the kernel cache is large.
In the embodiment of the application, the target thread extracts the first data from the kernel cache, so that the kernel cache can be prevented from performing data interaction with a large number of user threads in a short time, thread switching is reduced, the processing pressure of the kernel cache can be reduced, and the data transmission efficiency is improved.
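To make the above receiver-side flow concrete, the following Go sketch models steps 101 to 103 under simplifying assumptions: the kernel cache is represented by a channel of received records, the user cache by a map keyed by thread identifier, and the indication to the first user thread by a notification channel. All names (record, userCache, put, take and so on) are illustrative and are not part of the disclosed implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// record models one piece of "first data" together with the identifier of the
// first user thread it is destined for.
type record struct {
	threadID int
	payload  []byte
}

// userCache models the user-space cache: received data keyed by thread
// identifier, plus a per-thread notification channel (an assumed design).
type userCache struct {
	mu     sync.Mutex
	data   map[int][][]byte
	notify map[int]chan struct{}
}

func newUserCache() *userCache {
	return &userCache{data: map[int][][]byte{}, notify: map[int]chan struct{}{}}
}

// put stores first data under its thread identifier and indicates to the
// corresponding user thread that data is available.
func (c *userCache) put(r record) {
	c.mu.Lock()
	c.data[r.threadID] = append(c.data[r.threadID], r.payload)
	ch := c.notify[r.threadID]
	c.mu.Unlock()
	select { // non-blocking indication
	case ch <- struct{}{}:
	default:
	}
}

// take extracts everything stored for the given thread identifier.
func (c *userCache) take(threadID int) [][]byte {
	c.mu.Lock()
	defer c.mu.Unlock()
	out := c.data[threadID]
	c.data[threadID] = nil
	return out
}

func main() {
	kernelCache := make(chan record, 64) // stand-in for the kernel receive buffer
	uc := newUserCache()
	uc.notify[1] = make(chan struct{}, 1)
	done := make(chan struct{})

	// First user thread: waits for the indication, then extracts its data
	// from the user cache according to its thread identifier.
	go func() {
		<-uc.notify[1]
		for _, p := range uc.take(1) {
			fmt.Printf("user thread 1 received %q\n", p)
		}
		close(done)
	}()

	// Target thread: the only goroutine that touches the kernel cache; it
	// frees kernel-cache space immediately by moving data to the user cache.
	go func() {
		for r := range kernelCache {
			uc.put(r)
		}
	}()

	kernelCache <- record{threadID: 1, payload: []byte("first data")}
	<-done
}
```

Because the single target goroutine is the only consumer of the kernel cache in this sketch, kernel-cache space is released as soon as data arrives, and the many user threads never access the kernel cache directly, which is the thread-switching saving argued above.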
Before data transmission, a user cache may be requested from the device where the user thread is located. The specific steps are as follows:
step 1, sending a user cache request to equipment running a first user thread.
Wherein the user cache request is used for requesting allocation of the cache space.
In an implementation, after the data transmission party starts running, it may send a user cache request to the device running the first user thread.
In the embodiment of the present application, when the data transmission party is a client installed with an application program, the device running the first user thread may be the client. When the data transmission party is the server of the application program, the device running the first user thread may be the server. When the server is a service cluster formed by a plurality of servers, the device running the first user thread may be some or all of the servers in the service cluster.
Step 2, receiving response information of the device.
Wherein the response information indicates a storage location of the buffer space allocated for the first user thread.
In an implementation, after receiving the user cache request, the device running the first user thread may determine part of the cache space in its local cache as the cache space allocated to the first user thread. The device may then generate response information indicating the storage location of that cache space and send the response information.
Alternatively, a user thread belonging to the same application as the first user thread may use the buffer space allocated for the first user thread, i.e., multiple user threads of the application may use the buffer space in common. Thus, the response information may indicate a storage location of the cache space allocated for the application to which the first user thread belongs.
Optionally, in a case where multiple applications run in a device running the first user thread, the device may allocate the same cache space for the multiple applications, and the user threads of the multiple applications share the cache space. Alternatively, the device running the first user thread may allocate a buffer space for each application separately, and the user thread of each application may use the buffer space corresponding to the application.
Step 3, determining the cache space indicated by the storage location as the user cache.
In this embodiment of the present application, a user cache request may be sent to the device running the first user thread, and response information indicating the storage location of the allocated cache space may then be received. The cache space indicated by the storage location may then be determined as the user cache. In this way, a user cache mechanism is added on top of the kernel cache, so that data transmission information of the second data is stored in and extracted from the user cache when sending the second data, and the first data is stored in and extracted from the user cache when receiving the first data.
The above procedure of requesting the user cache may be performed by the target thread in the data transmission party or by a user thread in the data transmission party; this is not specifically limited in the embodiment of the present application.
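A minimal Go sketch of the request/response exchange described in steps 1 to 3 follows. The message fields, the allocate function, and the byte-slice stand-in for the device's local cache are assumptions made for illustration; the disclosure does not specify a message format.

```go
package main

import "fmt"

// userCacheRequest asks the device running the first user thread to allocate
// cache space; the fields are illustrative assumptions.
type userCacheRequest struct {
	requestedBytes int
	appID          string // would let the device allocate one cache per application
}

// userCacheResponse indicates the storage location of the allocated cache space.
type userCacheResponse struct {
	baseOffset int // start of the allocated region inside the device's local cache
	size       int
}

// allocate models the device side: it reserves part of its local cache and
// answers with the storage location of the reserved region.
func allocate(next *int, req userCacheRequest) userCacheResponse {
	resp := userCacheResponse{baseOffset: *next, size: req.requestedBytes}
	*next += req.requestedBytes
	return resp
}

func main() {
	localCache := make([]byte, 1<<20) // the device's local cache
	next := 0

	// Step 1: send the user cache request. Step 2: receive the response.
	resp := allocate(&next, userCacheRequest{requestedBytes: 4096, appID: "demo-app"})

	// Step 3: determine the cache space indicated by the storage location
	// as the user cache.
	userCache := localCache[resp.baseOffset : resp.baseOffset+resp.size]
	fmt.Printf("user cache allocated: offset=%d size=%d\n", resp.baseOffset, len(userCache))
}
```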
Optionally, a user cache may be set in the data receiving side, and the user threads of all the application programs in the data receiving side may share and use the user cache. Or, a plurality of user caches may be set in the data receiver, and the user caches are respectively set for a plurality of application programs in the data receiver. When the user caches are respectively set for a plurality of applications, a separate user cache may be set for each application, and thus, user threads belonging to the same application may use the same user cache. Alternatively, the same user cache may be set for a part of the applications, and user threads belonging to a part of the applications may share the user cache.
According to different setting conditions of a user cache in a data receiving party, an embodiment of the present application provides an implementation manner for storing first data in the user cache, including:
step 1, obtaining a user cache corresponding to a first user thread from one or more user caches.
In implementation, for a case that one user cache is set in the data receiving side, after the first data is extracted from the kernel cache, the target thread may directly use the user cache as a user cache corresponding to the first user thread.
For the condition that a plurality of user caches are arranged in the data receiving party, after the first data are extracted from the kernel cache, the target thread can obtain the thread identification carried by the first data, and the user thread corresponding to the thread identification is used as the first user thread. The target thread may then determine the application to which the first user thread belongs, resulting in a target application. Then, the target thread may search a user cache corresponding to the target application program in the application program and the user cache having the corresponding relationship, and use the searched user cache as the user cache corresponding to the first user thread.
Step 2, storing the first data in the user cache corresponding to the first user thread.
In the embodiment of the application, the target thread can directly store the first data in the user cache under the condition that the user cache is arranged in the data receiving party, so that the data storage efficiency of the data receiving party can be improved.
For a case where a plurality of user caches are set in the data receiving side, the target thread may store the first data in the user cache corresponding to the target application program. Therefore, the first data can be stored in the user cache of the corresponding application program, the first user thread can conveniently search the first data in the user cache, and the data extraction efficiency of the data receiving party can be improved.
Optionally, a thread identifier of the first user thread may be obtained, and the storing and extracting of the first data is implemented in the user cache based on the thread identifier, where the processing procedure includes:
step one, extracting first data from a kernel cache.
In the implementation, the processing procedure of this step may refer to the processing procedure of step 101, and is not described herein again.
Step two, correspondingly storing the thread identifier and the first data in the user cache.
In implementation, the first data may carry a thread identifier, and the target thread may obtain the thread identifier carried by the first data to obtain the thread identifier of the first user thread. The target thread may then store the thread identification and the first data of the first user thread in the user cache, respectively.
Step three, instructing the first user thread to extract the first data from the user cache according to the thread identifier.
In an implementation, the target thread may send indication information indicating that there is a thread identifier of the first user thread, and then the first user thread may extract data corresponding to the thread identifier of the first user thread from the user cache to obtain the first data.
In the embodiment of the application, the thread identifier and the first data are correspondingly stored in the user cache, so that the first data can be conveniently extracted by the subsequent first user thread in the user cache according to the thread identifier, and the data extraction efficiency can be improved. Furthermore, the first user thread is ensured to extract the first data from the user cache in time, and the storage space of the user cache is released, so that the user cache can store more data extracted from the kernel cache, and the data transmission efficiency is further improved.
The embodiment of the present application further provides a data transmission method, which may be applied to a data sender, as shown in fig. 2, and includes the following specific steps:
step 201, data transmission information is extracted from a user cache.
The data transmission information is used to indicate the data that the second user thread instructs to send; for convenience of distinction, the data that the second user thread instructs to send is referred to as second data.
In implementation, in the case that the second user thread in the data transmitting side has the second data to be transmitted to the data receiving side, the second user thread may generate data transmission information of the second data. The second user thread may then store the data transmission information in a user cache corresponding to the second user thread. The target thread may then extract the data transfer information from the user cache.
In the embodiment of the present application, the data transmission information may include the storage location of the second data in the data storage space and/or a data identifier of the second data. The data transmission information may also include information that can identify the second user thread, information that can identify the data receiver of the second data, and so on.
For example, the data transmission information may include a thread number of the second user thread, a data cache pointer of the second data, a data length of the second data, and a connection handle indicating the data receiver. The thread number is used to identify the second user thread, the data cache pointer and the data length are used to indicate the storage location of the second data in the data storage space, and the connection handle is used to indicate the data receiver of the second data.
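As an illustrative data layout only, the fields listed in this example could be grouped as in the following Go sketch; the field names and types are assumptions, not a disclosed wire format.

```go
package main

import "fmt"

// dataTransmissionInfo is one illustrative layout for the metadata that the
// second user thread places in the user cache instead of the second data
// itself. The field names mirror the example above (thread number, data cache
// pointer, data length, connection handle).
type dataTransmissionInfo struct {
	threadID   int // thread number identifying the second user thread
	dataOffset int // "data cache pointer": where the second data sits in the data storage space
	dataLen    int // data length of the second data
	connHandle int // connection handle identifying the data receiver
}

func main() {
	info := dataTransmissionInfo{threadID: 7, dataOffset: 4096, dataLen: 512, connHandle: 3}
	fmt.Printf("metadata %+v describes a %d-byte payload without copying it\n", info, info.dataLen)
}
```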
Step 202, second data identified by the data transmission information is obtained from the data storage space.
In an implementation, in the case that the data transmission information includes a storage location of the second data in the data storage space, the target thread may read the second data from the data storage space according to the storage location included in the data transmission information.
In the case that the data transmission information includes the data identifier of the second data, the target thread may read the second data from the data storage space according to the data identifier.
Step 203, storing the second data in the kernel cache.
The kernel cache is used for sending the second data.
The kernel has the characteristic of sending data to the corresponding data receiver once data to be sent is stored in the kernel cache. By utilizing this kernel characteristic, the second data can be sent to the data receiver of the second data simply by storing the second data to be sent in the kernel cache.
In the embodiment of the application, when sending the second data, the second user thread stores the data transmission information of the second data into the user cache, and the target thread acquires the second data according to the data transmission information extracted from the user cache and stores the second data into the kernel cache. Therefore, under a high concurrency scene, a large amount of second data to be sent can be prevented from being directly stored in the kernel cache, so that the storage pressure of the kernel cache can be reduced, and the data transmission efficiency is improved.
Furthermore, compared with the case that a large number of user threads directly store second data in the kernel cache, the target thread stores the second data in the kernel cache, so that the kernel cache can be prevented from performing data interaction with a large number of user threads in a short time, the number of thread switching is reduced, the processing pressure of the kernel cache can be reduced, and the data transmission efficiency is improved.
In addition, because the data transmission information capable of indicating the second data is stored in the user cache, the target thread acquires the second data according to the data transmission information and stores the second data in the kernel cache, the second data is stored in the kernel cache only once in the sending process of the second data. Compared with the method that the second data are stored in the user cache, the second data are subsequently extracted from the user cache and then stored in the kernel cache, the repeated storage of the second data is avoided. On one hand, the storage space of the user cache can be saved, on the other hand, the second data with larger capacity can be prevented from being stored and extracted in the user cache, and the processing resource consumed by the data transmission party for reading and writing the data is saved.
Optionally, the target thread may extract the data transfer information from the user cache in a variety of ways. In a first possible implementation manner, the target thread may detect whether the user cache stores data transmission information, and in a case that the user cache stores data transmission information, the target thread may extract the data transmission information from the user cache. In the case where it is not detected that the user cache stores data transmission information, the target thread may not perform subsequent processing.
In a second possible implementation manner, the second user thread may send the indication information to the target thread after storing the data transmission information in the user cache. The target thread may extract the data transfer information from the user cache upon receiving the indication information.
In a third possible implementation manner, the target thread may extract the data transmission information according to the storage amount occupied by the data transmission information in the user cache, as shown in fig. 3, where the method includes:
step 301, detecting the storage occupied by the data transmission information in the user cache.
In an implementation, the target thread may detect the amount of storage occupied by the data transfer information in the user cache. The target thread may then compare the detected amount of storage to a preset threshold. And under the condition that the storage amount does not reach the preset threshold value, the target thread does not perform subsequent processing.
Optionally, the preset threshold may be set according to the data amount of data transmission in the actual data transmission process. For example, the preset threshold may be 1 Kb.
Step 302, extracting data transmission information from the user cache when the storage amount reaches a preset threshold value.
In the embodiment of the present application, when the storage amount of the data transmission information reaches the preset threshold, the target thread may extract a plurality of pieces of data transmission information from the user cache and then obtain a plurality of pieces of second data according to them. In this way, the target thread can store the plurality of pieces of second data in the kernel cache in batches, so that the second data can be sent in batches and the data is prevented from being sent in many scattered small packets.
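A Go sketch of the threshold-driven processing of steps 301 and 302, combined with the batch send just described. The in-memory stand-ins for the user cache, the data storage space and the kernel cache, as well as the assumed 24-byte record size, are illustrative assumptions; the 1 KB threshold matches the example given above.

```go
package main

import (
	"fmt"
	"sync"
)

// txInfo mirrors the data transmission information sketched earlier: where the
// second data lives in the data storage space and which connection it goes to.
type txInfo struct {
	offset, length int
	connHandle     int
}

const txInfoSize = 24        // assumed encoded size of one metadata record, in bytes
const presetThreshold = 1024 // 1 KB, matching the example above

// sendUserCache is a stand-in for the sender-side user cache holding metadata.
type sendUserCache struct {
	mu    sync.Mutex
	infos []txInfo
}

// store is called by the second user thread; only metadata is stored, never the payload.
func (c *sendUserCache) store(i txInfo) {
	c.mu.Lock()
	c.infos = append(c.infos, i)
	c.mu.Unlock()
}

// drainIfFull is called by the target thread: it detects the storage amount
// occupied by the metadata and extracts all of it once the threshold is reached.
func (c *sendUserCache) drainIfFull() []txInfo {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.infos)*txInfoSize < presetThreshold {
		return nil // below the preset threshold: no subsequent processing
	}
	out := c.infos
	c.infos = nil
	return out
}

func main() {
	dataStorage := make([]byte, 8192) // stand-in for the data storage space
	copy(dataStorage[100:], "hello")

	uc := &sendUserCache{}
	var kernelCache [][]byte // stand-in for the kernel send buffer

	// Second user thread(s): store metadata describing 50 payloads.
	for i := 0; i < 50; i++ {
		uc.store(txInfo{offset: 100, length: 5, connHandle: 1})
	}

	// Target thread: once the threshold is hit, resolve each record against the
	// data storage space and stage the payloads for the kernel cache in one batch.
	if batch := uc.drainIfFull(); batch != nil {
		for _, info := range batch {
			payload := dataStorage[info.offset : info.offset+info.length]
			kernelCache = append(kernelCache, payload) // the single copy into the kernel cache
		}
		fmt.Printf("staged %d payloads for sending in one batch\n", len(kernelCache))
	}
}
```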
Optionally, the data transmission information may also be used to indicate the data receiver of the second data. While storing the second data in the kernel cache, the target thread may indicate, to the kernel cache, the data receiver of the second data based on the data transmission information, so that the kernel cache sends the second data to the indicated data receiver. The embodiment of the present application provides two implementation manners of indicating, to the kernel cache, the data receiver of the second data, including:
in a first mode, the core cache may include a plurality of sub-cache regions, each sub-cache region corresponds to one data receiver, and the core cache may be configured to send data stored in each sub-cache region to the data receiver corresponding to the sub-cache region. In this case, the process of instructing the data receiver of the kernel to cache the second data may include:
step one, searching a sub-cache region corresponding to the data receiver of the second data from the data receiver and the sub-cache region with the corresponding relation to obtain a target sub-cache region.
In an implementation, after extracting the data transfer information, the target thread may determine a data recipient indicated by the data transfer information. Then, the target thread may search a sub-cache region corresponding to the determined data receiver from the data receiver and the sub-cache region having the corresponding relationship, so as to obtain a target sub-cache region.
Step two, storing the second data in the target sub-cache region.
In this embodiment, the target thread may search a target sub-cache region corresponding to the data receiver of the second data from the data receiver and the sub-cache region having the corresponding relationship, and then store the second data in the target sub-cache region. Therefore, when the second data is sent, the data sender can conveniently and quickly determine the data receiver of the second data, and the data transmission efficiency is improved.
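The first mode reduces to a lookup from the data receiver indicated by the data transmission information to its sub-cache region. A Go sketch under illustrative assumptions follows; the map keyed by a receiver name stands in for the correspondence between data receivers and sub-cache regions.

```go
package main

import "fmt"

// subCache models one sub-cache region of the kernel cache; everything stored
// in it is destined for exactly one data receiver.
type subCache struct {
	pending [][]byte
}

// kernelSendCache models a kernel cache made up of several sub-cache regions,
// keyed here by an illustrative receiver name.
type kernelSendCache struct {
	regions map[string]*subCache
}

// storeForReceiver looks up the target sub-cache region corresponding to the
// data receiver indicated by the data transmission information and stores the
// second data there.
func (k *kernelSendCache) storeForReceiver(receiver string, secondData []byte) error {
	region, ok := k.regions[receiver]
	if !ok {
		return fmt.Errorf("no sub-cache region corresponds to receiver %q", receiver)
	}
	region.pending = append(region.pending, secondData)
	return nil
}

func main() {
	kc := &kernelSendCache{regions: map[string]*subCache{
		"receiver-A": {},
		"receiver-B": {},
	}}

	if err := kc.storeForReceiver("receiver-A", []byte("second data")); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("receiver-A has %d pending payloads\n", len(kc.regions["receiver-A"].pending))
}
```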
In a second mode, the data transmission information may include a data receiver identifier, and the kernel cache may be configured to send the second data according to the data receiver identifier, where the data receiver identifier is an identifier of the data receiver of the second data. In this case, the process of indicating, to the kernel cache, the data receiver of the second data may include: correspondingly storing the data receiver identifier and the second data in the kernel cache. In this way, when the second data is sent, it can be sent to the data receiver indicated by the data receiver identifier, so that the data sender can conveniently and quickly determine the data receiver of the second data, which improves data transmission efficiency.
As shown in fig. 4, which is an example diagram of the data transmission method provided in the embodiment of the present application, both the device 100 and the device 200 are data transmission parties; the device 100 may be a client installed with an application program, and the device 200 may be the server of that application program. After a data transmission party starts to run, a user cache can be requested for the application program, so that the data transmission party can perform data transmission based on the user cache and the kernel cache.
When the client sends data to the server, as shown by the solid arrows in the device 100, when a user thread in the client has second data to send, the user thread may generate data transmission information for the second data and then write the data transmission information into the user cache. Then, by calling a time-waiting model, the user thread can enter a waiting state to wait for the response data sent by the server. The data transmission information may include the thread number of the user thread, a data cache pointer of the second data, the data length of the second data, and a connection handle indicating the data receiver.
The target thread can detect the storage amount occupied by the data transmission information in the user cache. When the storage amount reaches the preset threshold of 1 Kb, the target thread extracts the data transmission information from the user cache, obtains the second data from the data storage space according to the data transmission information, and then stores the obtained second data in the kernel cache. The second data stored in the kernel cache can then be sent to the server by using the kernel characteristic, so that data transmission is realized.
As shown by the dotted arrow in the device 200, after receiving the second data sent by the client, the server may store the second data as the received first data in the kernel cache. The target thread in the server can extract the first data from the kernel cache and write the first data into the user cache. Therefore, the first data are extracted from the kernel cache in time and stored in the user cache, so that the probability of the kernel cache becoming full can be reduced, and the data receiving efficiency is improved.
Then, the user thread in the server may extract the first data from the user cache, and generate response data of the first data. Then, the server may send the response data to the client, where the sending process is shown by a solid arrow in the device 200, and the specific sending process is similar to the sending process where the client sends the second data to the server, and is not described here again.
After receiving the response data sent by the server, the client can store the response data in the kernel cache; the target thread can then extract the response data from the kernel cache and store it in the user cache. After that, the target thread can activate, in an event-driven manner, the first user thread that entered the waiting state, and the first user thread extracts the response data from the user cache and processes it according to its service logic.
Thus, TCP (Transmission Control Protocol) communication between the client and the server can be realized.
Based on the same technical concept, an embodiment of the present application further provides a data transmission apparatus, as shown in fig. 5, the apparatus includes:
an extracting module 510, configured to extract first data from a kernel cache, where the first data is data to be sent to a first user thread;
a storage module 520, configured to store the first data in a user cache;
an indicating module 530, configured to instruct the first user thread to extract the first data from the user cache.
Optionally, the storage module includes:
the obtaining submodule is used for obtaining a user cache corresponding to the first user thread from one or more user caches;
and the first storage submodule is used for storing the first data into a user cache corresponding to the first user thread.
Optionally, the storage module is configured to correspondingly store a thread identifier and the first data in the user cache, where the thread identifier is an identifier of the first user thread;
the indicating module is configured to instruct the first user thread to extract the first data from the user cache according to the thread identifier.
Optionally, the apparatus further comprises:
a sending module, configured to send a user cache request to a device running the first user thread, where the user cache request is used to request allocation of a cache space;
a receiving module, configured to receive response information of the device, where the response information indicates a storage location of a cache space allocated to the first user thread;
a determining module, configured to determine a cache space indicated by the storage location as the user cache.
In the embodiment of the present application, the first data is extracted from the kernel cache after it is received and is then stored in the user cache, so that the storage space of the kernel cache can be released in real time and the storage pressure of the kernel cache is relieved. This avoids the problem that data cannot be received or sent when a large amount of data is processed within a short time, and improves data transmission efficiency.
Based on the same technical concept, an embodiment of the present application further provides a data transmission apparatus, as shown in fig. 6, the apparatus includes:
an extracting module 610, configured to extract data transmission information from a user buffer, where the data transmission information is used to represent data indicated to be sent by a second user thread, and the data transmission information is stored in the user buffer by the second user thread;
an obtaining module 620, configured to obtain, from a data storage space, second data identified by the data transmission information;
a storage module 630, configured to store the second data in a kernel cache, where the kernel cache is used to send the second data.
Optionally, the extracting module includes:
the detection submodule is used for detecting the storage amount occupied by the data transmission information in the user cache;
and the extraction submodule is used for extracting the data transmission information from the user cache under the condition that the storage amount reaches a preset threshold value.
Optionally, the storage module includes:
the searching submodule is used for searching a sub-cache region corresponding to the data receiver of the second data from the data receiver and the sub-cache region with the corresponding relation under the condition that the data transmission information indicates the data receiver of the second data, so as to obtain a target sub-cache region, and the kernel cache comprises a plurality of sub-cache regions;
and the storage submodule is used for storing the second data in the target sub-cache region, wherein the kernel cache is used for sending the data stored in the target sub-cache region to the data receiver.
Optionally, the storage module is configured to correspondingly store a data receiver identifier and the second data in a kernel cache, where the data receiver identifier is an identifier of a data receiver that receives the second data, the data transmission information includes the data receiver identifier, and the kernel cache is configured to send the second data according to the data receiver identifier.
In the embodiment of the application, when sending the second data, the second user thread stores the data transmission information of the second data into the user cache, and the target thread acquires the second data according to the data transmission information extracted from the user cache and stores the second data into the kernel cache. Therefore, under a high concurrency scene, a large amount of second data to be sent can be prevented from being directly stored in the kernel cache, so that the storage pressure of the kernel cache can be reduced, and the data transmission efficiency is improved.
Based on the same technical concept, the embodiment of the present application further provides an electronic device, as shown in fig. 7, including a processor 701, a communication interface 702, a memory 703 and a communication bus 704, where the processor 701, the communication interface 702 and the memory 703 communicate with each other through the communication bus 704,
a memory 703 for storing a computer program;
the processor 701 is configured to implement the steps of any of the data transmission methods described above when executing the program stored in the memory 703.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present application, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above data transmission methods.
In yet another embodiment provided by the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the data transmission methods of the above embodiments.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired connection (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or a wireless connection (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method of data transmission, the method comprising:
extracting first data from a kernel cache, wherein the first data is data to be sent to a first user thread;
storing the first data in a user cache;
instructing the first user thread to extract the first data from the user cache.
2. The method of claim 1, wherein storing the first data in a user cache comprises:
obtaining a user cache corresponding to the first user thread from one or more user caches;
and storing the first data into a user cache corresponding to the first user thread.
3. The method of claim 1, wherein storing the first data in a user cache comprises:
correspondingly storing a thread identifier and the first data in the user cache, wherein the thread identifier is an identifier of the first user thread;
the instructing the first user thread to extract the first data from the user cache comprises:
and instructing the first user thread to extract the first data from the user cache according to the thread identifier.
4. The method of claim 1, wherein prior to storing the first data in a user cache, the method further comprises:
sending a user cache request to a device running the first user thread, wherein the user cache request is used for requesting to allocate a cache space;
receiving response information of the device, wherein the response information indicates a storage position of a cache space allocated to the first user thread;
and determining the cache space indicated by the storage position as the user cache.
5. A method of data transmission, the method comprising:
extracting data transmission information from a user cache, wherein the data transmission information is used for representing data which is indicated to be sent by a second user thread, and the data transmission information is stored in the user cache by the second user thread;
acquiring second data identified by the data transmission information from a data storage space;
and storing the second data in a kernel cache, wherein the kernel cache is used for sending the second data.
6. The method of claim 5, wherein extracting data transmission information from the user buffer comprises:
detecting the storage amount occupied by the data transmission information in the user cache;
and under the condition that the storage amount reaches a preset threshold value, extracting the data transmission information from the user cache.
7. The method of claim 5, wherein storing the second data in a kernel cache comprises:
under the condition that the data transmission information indicates a data receiver of the second data, searching a sub-cache region corresponding to the data receiver of the second data from the data receiver and the sub-cache region with the corresponding relation to obtain a target sub-cache region, wherein the kernel cache comprises a plurality of sub-cache regions;
and storing the second data in the target sub-cache region, wherein the kernel cache is used for sending the data stored in the target sub-cache region to the data receiver.
8. The method of claim 5, wherein storing the second data in a kernel cache comprises:
and correspondingly storing a data receiver identification and the second data in a kernel cache, wherein the data receiver identification is an identification of a data receiver for receiving the second data, the data transmission information comprises the data receiver identification, and the kernel cache is used for sending the second data according to the data receiver identification.
9. A data transmission apparatus, characterized in that the apparatus comprises:
the device comprises an extraction module, a first processing module and a second processing module, wherein the extraction module is used for extracting first data from a kernel cache, and the first data is data to be sent to a first user thread;
the storage module is used for storing the first data into a user cache;
an indication module, configured to indicate the first user thread to extract the first data from the user cache.
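The apparatus of claim 9 is the receive-side method expressed as cooperating modules. One way to picture that decomposition, with illustrative class and method names that are not part of the claim, is a class per module:

import threading

class ExtractionModule:
    """Extracts first data from the kernel cache (data destined for the first user thread)."""
    def __init__(self, kernel_cache):
        self.kernel_cache = kernel_cache

    def extract(self):
        return self.kernel_cache.get()

class StorageModule:
    """Stores the first data in the user cache, keyed by the target thread."""
    def __init__(self, user_cache):
        self.user_cache = user_cache

    def store(self, thread_id, first_data):
        self.user_cache.setdefault(thread_id, []).append(first_data)

class IndicationModule:
    """Signals the first user thread that its data is ready in the user cache."""
    def __init__(self, ready_event: threading.Event):
        self.ready_event = ready_event

    def indicate(self):
        self.ready_event.set()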
10. A data transmission apparatus, characterized in that the apparatus comprises:
an extraction module, configured to extract data transmission information from a user cache, wherein the data transmission information identifies data that a second user thread has indicated is to be sent, and the data transmission information is stored in the user cache by the second user thread;
an acquisition module, configured to acquire, from a data storage space, second data identified by the data transmission information;
and a storage module, configured to store the second data in a kernel cache, wherein the kernel cache is used for sending the second data.
11. An electronic device, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1 to 4 or claims 5 to 8 when executing the program stored in the memory.
12. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the method steps of any one of claims 1 to 4 or claims 5 to 8.
CN202011132819.5A 2020-10-21 2020-10-21 Data transmission method and device, electronic equipment and storage medium Pending CN114390098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011132819.5A CN114390098A (en) 2020-10-21 2020-10-21 Data transmission method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114390098A true CN114390098A (en) 2022-04-22

Family

ID=81193681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011132819.5A Pending CN114390098A (en) 2020-10-21 2020-10-21 Data transmission method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114390098A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010051927A1 (en) * 2000-06-08 2001-12-13 Blinkspeed, Inc. Increasing web page browsing efficiency by periodically physically distributing memory media on which web page data are cached
CN101266561A (en) * 2008-04-29 2008-09-17 中兴通讯股份有限公司 Inter-core message communication method for multi-core multithread processor
CN101770412A (en) * 2010-01-22 2010-07-07 华中科技大学 Continuous data caching system and data caching method thereof
CN103150268A (en) * 2013-03-04 2013-06-12 浪潮电子信息产业股份有限公司 Block-level data capture method in CDP (Continuous Data Protection)
CN105897849A (en) * 2015-12-22 2016-08-24 乐视云计算有限公司 Cross-process service method and system and proxy server
CN107071059A (en) * 2017-05-25 2017-08-18 腾讯科技(深圳)有限公司 Distributed caching service implementing method, device, terminal, server and system
WO2018077292A1 (en) * 2016-10-28 2018-05-03 北京市商汤科技开发有限公司 Data processing method and system, electronic device
CN110311975A (en) * 2019-06-28 2019-10-08 北京奇艺世纪科技有限公司 A kind of data request processing method and device
CN110955584A (en) * 2018-09-26 2020-04-03 Oppo广东移动通信有限公司 Block device access tracking method and device, storage medium and terminal
CN111209123A (en) * 2019-12-26 2020-05-29 天津中科曙光存储科技有限公司 Local storage IO protocol stack data interaction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination