CN108631947B - RDMA (Remote Direct Memory Access) network data transmission method based on erasure codes - Google Patents
- Publication number
- CN108631947B CN108631947B CN201810487054.3A CN201810487054A CN108631947B CN 108631947 B CN108631947 B CN 108631947B CN 201810487054 A CN201810487054 A CN 201810487054A CN 108631947 B CN108631947 B CN 108631947B
- Authority
- CN
- China
- Prior art keywords
- data
- frame
- receiving end
- blocks
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/0001—Systems modifying transmission characteristics according to link quality, e.g. power backoff
- H04L1/0006—Systems modifying transmission characteristics according to link quality, e.g. power backoff by adapting the transmission format
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
- H04L1/004—Arrangements for detecting or preventing errors in the information received by using forward error control
- H04L1/0056—Systems characterized by the type of code used
- H04L1/0061—Error detection codes
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Quality & Reliability (AREA)
- Communication Control (AREA)
Abstract
The invention discloses an erasure-code-based RDMA network data transmission method comprising the following steps: (1) the sending end obtains data from the application, divides it into fixed-size data frames, splits each frame into k equal-size data blocks, and sends the data blocks to the receiving end; (2) if the data blocks do not reach the receiving end within a specified time threshold, the sending end encodes the k data blocks to generate m check blocks and sends the check blocks to the receiving end; (3) the receiving end can recover a frame of data once it has received any k of the k + m blocks, and one data transmission is complete once all data frames have been received. By generating redundant data with erasure codes, the receiving end need not wait for packets on delayed paths, which reduces transmission latency; by offloading erasure-code computation to hardware, no extra CPU overhead is incurred.
Description
Technical Field
The invention belongs to the technical field of networks, and particularly relates to an erasure code-based RDMA network data transmission method.
Background
In the face of high-concurrency, low-latency applications such as cloud computing, big data, and artificial-intelligence computing, the traditional TCP/IP software and hardware architecture can no longer meet the requirements. Its multiple memory copies, interrupt handling, context switches, complex TCP/IP protocol processing, store-and-forward operation, and packet loss cause excessive transmission latency and extra CPU computation overhead.
Remote Direct Memory Access (RDMA) is a complementary technology to TCP/IP that provides a messaging service with direct access to virtual memory on a remote machine. Because the RDMA network card can copy data directly, the complex network protocol stack is bypassed and operating-system involvement is minimized. This design achieves low-latency, high-throughput data transfer with reduced CPU overhead.
Modern data centers typically consist of hundreds of servers and require terabit-level bi-directional bandwidth internally to move data. To meet such network demands while reducing overhead, most data centers adopt Clos-topology networks (typically three or more layers), which scale network capacity horizontally and use many-port switches in place of routers. FIG. 1 shows the topology of a typical Clos-structured data center network.
Data center servers run a large number of cloud services, which create a wide variety of traffic patterns. Different data flows place different demands on the network: large flows carry large volumes of data and demand bandwidth, while small flows carry little data but generally carry control information and are sensitive to transmission latency. When large and small flows share the network, small flows can suffer high latency because they are scheduled behind large flows. Fine-grained transmission, in which large flows are split into fixed-size small flows, avoids this problem; after splitting, a large flow can be transmitted in parallel over the multiple paths of a data-center RDMA network, reducing transmission latency. This, however, introduces new problems: the completion time of a data stream is determined by the last-arriving data block, producing a long-tail effect, and the receiver must run a complex sorting algorithm to reassemble the original data.
Disclosure of Invention
Purpose of the invention: to address the defects of the prior art, the invention provides an erasure-code-based RDMA network data transmission method that reduces transmission latency through fine-grained, multi-path parallel transmission, eliminates the long-tail effect of network transmission with erasure codes, simplifies reordering at the receiving end, and avoids extra CPU computation by offloading erasure-code calculation to the RDMA network card hardware.
The technical scheme is as follows: the erasure-code-based RDMA network data transmission method of the invention comprises the following steps:
S1, the sending end obtains data from the application, divides it into fixed-size data frames, splits each frame into k equal-size data blocks, and sends the data blocks to the receiving end;
S2, if the data blocks do not reach the receiving end within the specified time threshold, the sending end encodes the k data blocks to generate m check blocks and sends the check blocks to the receiving end;
S3, once the receiving end has received any k of the k + m blocks, it recovers the frame of data; once all data frames have been received, one data transmission is complete.
Further, step S1 comprises: the sending end copies the application's data into the buffer frame by frame according to the frame size and assigns each frame a frame number, which increments by one for each frame sent. A frame of data is sliced into k data blocks, each of block_size bytes. These data blocks are then sent to the receiving end in parallel over k QPs, and the data frame is added to the timer's event sequence.
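The copy-and-slice step of S1 can be sketched as follows; `slice_into_frames` and the returned `(frame_no, blocks)` shape are assumptions for illustration, while the zero padding of a short final frame follows the frame-format description given in this patent.

```python
def slice_into_frames(data: bytes, k: int, block_size: int):
    """Copy application data into fixed-size frames of k equal-size blocks,
    assigning consecutive frame numbers; a short final frame is zero-padded."""
    frame_size = k * block_size
    frames = []
    for frame_no, start in enumerate(range(0, len(data), frame_size)):
        frame = data[start:start + frame_size].ljust(frame_size, b"\x00")
        # Block i of the frame would be sent over QP i at the sender.
        blocks = [frame[i * block_size:(i + 1) * block_size] for i in range(k)]
        frames.append((frame_no, blocks))
    return frames

frames = slice_into_frames(b"hello rdma", k=4, block_size=4)
assert len(frames) == 1 and all(len(b) == 4 for b in frames[0][1])
```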
Step S2 comprises: the sending end sets a timer for each frame of data; when the timer expires, it computes the erasure code for the frame and sends the check blocks to the receiving end. The timer module checks whether each sent data frame has timed out; if not all k data blocks of a frame have successfully reached the receiving end when the timeout expires, the frame may be delayed, so the erasure-code module encodes the frame to generate m check blocks, and the data transmission module is then invoked to send them to the receiving end over another m QPs.
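The delayed-coding timer can be sketched like this; `FrameTimer` and its method names are hypothetical, and a real implementation would be driven by send/receive completion events rather than explicit polling.

```python
import time

class FrameTimer:
    """Sketch of the delayed-coding policy: check blocks are computed and
    sent only for frames whose blocks have not all arrived in time."""
    def __init__(self, threshold_s: float):
        self.threshold = threshold_s
        self.pending = {}                       # frame_no -> send timestamp

    def on_frame_sent(self, frame_no: int):
        self.pending[frame_no] = time.monotonic()

    def on_frame_acked(self, frame_no: int):    # all k blocks arrived in time
        self.pending.pop(frame_no, None)

    def timed_out(self):
        """Frames past the threshold: these get encoded into m check blocks."""
        now = time.monotonic()
        return [f for f, t in self.pending.items() if now - t > self.threshold]

timer = FrameTimer(threshold_s=0.001)
timer.on_frame_sent(0)
timer.on_frame_sent(1)
timer.on_frame_acked(0)           # frame 0 made it: no encoding cost at all
time.sleep(0.005)
assert timer.timed_out() == [1]   # frame 1 is late: encode and send parity
```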
Step S3 comprises: the receiving end decodes a data frame after receiving its blocks. On receiving a block, the receiving end identifies which data frame it belongs to from the immediate value of the RDMA Write with Immediate operation, and determines which data block or check block of that frame it is from the QP on which it arrived. When the total number of collected data blocks and check blocks of a frame reaches k, the original content of the frame is recovered by decoding, and the frame is added to the received-data queue to await reading by the application.
Further, the receiving end orders all received data frames into the original data to complete reception. The receiving end reads the received frames one by one in frame-number order, tracking the number of the next frame to read in a variable read_pos. Reads come in two modes, blocking and non-blocking: in blocking mode, the application waits until the frame numbered read_pos appears in the received-data queue; in non-blocking mode, if no frame numbered read_pos is in the queue, the call returns immediately. After the application reads one frame from the system's receive buffer, read_pos is incremented and the next frame is read, until the user's buffer is full or the last frame of one data transmission is encountered.
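The read_pos bookkeeping described above might look like the following sketch; `ReceiveQueue` and its method names are assumptions, and only the non-blocking mode is modelled (a blocking read would simply wait until the frame numbered read_pos has been decoded).

```python
class ReceiveQueue:
    """Sketch of in-order delivery: decoded frames are handed to the
    application strictly by frame number, tracked by read_pos."""
    def __init__(self):
        self.read_pos = 0
        self.ready = {}                    # frame_no -> decoded frame payload

    def on_frame_decoded(self, frame_no: int, payload: bytes):
        self.ready[frame_no] = payload

    def read_nonblocking(self):
        """Return the next in-order frame, or None if it isn't decoded yet."""
        payload = self.ready.pop(self.read_pos, None)
        if payload is not None:
            self.read_pos += 1
        return payload

q = ReceiveQueue()
q.on_frame_decoded(1, b"second")       # frame 1 decoded before frame 0
assert q.read_nonblocking() is None    # frame 0 not here yet: return at once
q.on_frame_decoded(0, b"first")
assert q.read_nonblocking() == b"first"
assert q.read_nonblocking() == b"second"
```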
Advantageous effects: compared with the prior art, the invention has the following advantages:
1. The invention divides the data into equal-size data frames, splits each frame into k equal-size data blocks, sends the data blocks to the receiving end, computes the erasure code over the k data blocks to generate m check blocks, and sends the check blocks to the receiving end. The receiving end only needs any k of the k + m blocks to recover the data and need not wait for all blocks, so data redundancy reduces transmission latency.
2. To avoid the CPU cost of encoding, the invention uses the erasure-code hardware offload function of the RDMA network card: the erasure code is computed in hardware, so using it adds no CPU computation overhead.
3. The invention uses a systematic code, whose encoded output contains the original data. When the k data blocks reach the receiving end before the check blocks, the original data is obtained directly without any decoding computation, reducing decoding overhead.
4. The system of the invention uses a delayed-coding strategy. Since delay and loss in the network are relatively rare, a time threshold is set and check blocks are computed and sent only when data blocks have not successfully reached the receiving end by the timeout, further reducing encoding cost.
5. The invention provides an erasure-code-based reordering algorithm. Because the sending end fragments the data, computes the erasure code, and transmits data blocks and check blocks in parallel over multiple paths, neither data loss nor indefinite waiting occurs, and the receiving end can exploit this property to simplify its reordering algorithm.
Drawings
FIG. 1 is a network topology diagram of a data center using a Clos architecture;
fig. 2 is a diagram of a transmission system structure according to an embodiment of the present invention;
FIG. 3 is a diagram of a data frame structure for network transmission according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an asynchronous model of RDMA erasure code computation hardware offload.
Detailed Description
The technical scheme of the invention is further explained below with reference to the drawings.
FIG. 1 is a topology of a typical Clos-structured data center network. The top-of-rack switches (ToR) connect to the servers in their rack (Rack) through d0 10G links and to the aggregation switches (Agg) through d1 40G links. Each aggregation switch connects d2 top-of-rack switches and d3 spine switches (Spine). There are therefore d1 equivalent paths between hosts in the same aggregation area (pod), and d1 × d3 equivalent paths across the spine switches between different areas. For example, when d1 = d3 = 8, there are 64 such paths. Fully exploiting these parallel paths by splitting the data stream into fine-grained data blocks for parallel transmission effectively accelerates data transfer between nodes and reduces the network latency of data transmission.
The data transmission method mainly comprises the following steps: (1) the sending end obtains data from the application, divides it into fixed-size data frames, splits each frame into k equal-size data blocks, and sends the data blocks to the receiving end; (2) if the data blocks do not reach the receiving end within the specified time threshold, the sending end encodes the k data blocks to generate m check blocks and sends the check blocks to the receiving end; (3) the receiving end can recover a frame of data once it has received any k of the k + m blocks, and one data transmission is complete once all data frames have been received.
In one embodiment, the data transmission system architecture is shown in FIG. 2. The system is a host-side solution implemented as user-level data transfer middleware, so it is independent of the transport layer and transparent to the host's network stack. At the sending end, the application supplies the memory address and length of the data to be sent and the receiving end's network address; the system intercepts the application's data, copies it into a buffer it manages, divides it into fixed-size data blocks (for example, 64KB), and encodes them. Finally, the data blocks and check blocks are sent to the receiving end, which is responsible for decoding and delivering the data to the receiving application. At the receiving end, the system receives the data blocks, decodes the original data, reorders it, and copies the original blocks into the application's memory in their original order. Because every frame can be decoded independently, the blocks within a frame need not be sorted; and because each frame's completion time is bounded by the threshold, frames only need to be awaited by sequence number, with no special sorting. The system comprises five main modules: the buffer, the data transmission module, the erasure-code module, the timer module, and the event-processing module. Their functions are as follows:
(1) The buffer comprises a send buffer and a receive buffer, used to stage data to be sent and data received. A buffer is needed because erasure-code computation requires a temporary memory area and data transmission requires a buffer for synchronization. When sending, data is first copied into the buffer one fixed-size frame at a time and then sent to the receiving end; if delay occurs, the data is encoded and the check blocks are sent to the receiving end. The receiving end decodes a frame once enough data has been received and holds it for the application to read. The two sides must synchronize the write position when sending, because data the receiving end has not yet read must not be overwritten.
(2) The data transmission module is responsible for sending and receiving data. Sending: the data to be sent is obtained from the application, and one frame at a time (one frame equals the block size multiplied by the number of data blocks used in encoding) is copied from the start into the system's buffer. The frame is then split into k data blocks, which are written into the receiving end's buffer over different QPs (Queue Pairs) using the RDMA Write operation; finally, the frame's information is handed to the timer module to await encoding. This repeats until all data has been sent. Receiving: once a frame has received enough blocks, it is decoded and copied into the application's memory, and the call returns when the application's buffer is full or the end of one data transmission is reached.
(3) The erasure-code module invokes the network card's erasure-code hardware offload to encode the original data, or to decode it from the received blocks.
(4) The timer module tracks arrival statistics for the data blocks already sent in each frame. Timing starts when the blocks are sent; when the time threshold is exceeded, the timer invokes the coding module to encode the frame, and the resulting check blocks are sent to the receiving end.
(5) The event-processing module handles two types of events. The first is successful data transmission: on such an event, the module updates the per-frame block-success statistics kept by the timer, the timer module decides from those statistics whether encoding is needed, and the data transmission module then sends the check blocks to the receiving end. The second is new data received: if the event is for the first data block or check block of a frame, a new decoding context is created; otherwise, the decoding context of the corresponding frame is updated. When enough data blocks and check blocks have been received, the data transmission module is notified that the frame has been successfully received.
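The decoding-context handling above can be sketched as follows; the class name, the dict-based context table, and the boolean return convention are all assumptions.

```python
class DecodeContext:
    """Per-frame receive bookkeeping: created on the first block of a frame,
    and reports readiness once any k blocks (data or check) have arrived."""
    def __init__(self, k: int):
        self.k = k
        self.blocks = {}                   # QP index -> received block

    def on_block(self, qp_index: int, block: bytes) -> bool:
        self.blocks[qp_index] = block
        return len(self.blocks) >= self.k  # True: frame can now be decoded

contexts = {}                              # frame_no -> DecodeContext

def on_receive_event(frame_no: int, qp_index: int, block: bytes, k: int = 4):
    # First block of a frame creates a new context; later blocks update it.
    ctx = contexts.setdefault(frame_no, DecodeContext(k))
    return ctx.on_block(qp_index, block)

# Frame 7's blocks arrive on QPs 0, 1, 3 and 5 (the QP identifies the block):
results = [on_receive_event(7, qp, b"x") for qp in (0, 1, 3, 5)]
assert results == [False, False, False, True]   # decodable after any 4 blocks
```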
The transmitted data is sent in groups of data blocks belonging to a data frame; the structure of a data frame is shown in fig. 3. Each connection consists of k + m virtual connections (i.e., QPs); as described above, each frame has k data blocks and m check blocks, k + m blocks in total. The ith block of each data frame is sent over the ith QP into the position of the ith block of the corresponding frame in the receiving end's memory, so blocks need no labels: which block a piece of data is can be determined directly from the QP on which it was received. Because data of less than one frame must be zero-padded before encoding, not all data in a frame is necessarily valid. The beginning of each frame therefore uses the low 24 bits of a 32-bit word to store the size of the valid data in the frame, with value range 0 to (block_size × k − 4), and the high 8 bits to store the frame type (the frame's position within one piece of data), which can be:
(1) FULL: this frame is a complete piece of data;
(2) BEGIN: this is the first frame of a piece of data;
(3) MID: this is a middle frame of a piece of data;
(4) END: this is the last frame of a piece of data.
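The 32-bit header described above (low 24 bits for the valid-data size, high 8 bits for the frame type) can be packed and unpacked as in the sketch below; the numeric values chosen for FULL/BEGIN/MID/END are assumptions, since the patent names the types but not their encodings.

```python
FULL, BEGIN, MID, END = 0, 1, 2, 3     # assumed encodings for the frame types

def pack_header(valid_size: int, frame_type: int) -> int:
    """Pack the 32-bit frame header: the low 24 bits hold the valid-data
    size (0 .. block_size*k - 4), the high 8 bits hold the frame type."""
    assert 0 <= valid_size < (1 << 24)
    assert 0 <= frame_type < (1 << 8)
    return (frame_type << 24) | valid_size

def unpack_header(header: int):
    """Return (valid_size, frame_type) from a packed 32-bit header."""
    return header & 0xFFFFFF, header >> 24

header = pack_header(65532, MID)
assert unpack_header(header) == (65532, MID)
```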
The invention reduces the overhead of erasure-code computation by using the erasure-code hardware offload function of the RDMA network card. The coding model of hardware-offloaded erasure-code computation is shown in fig. 4. Reed-Solomon (RS) coding is used. RS coding is an encoding algorithm over finite fields; here GF(2^w) is used, where 2^w > k + m (w is the symbol length, k the number of data blocks, and m the number of check blocks). RS coding uses the symbol as its coding and decoding unit: a large data block is split into symbols of word length w (generally 8 or 16 bits), which are then encoded and decoded. Since data delay and data loss occur relatively rarely, it is desirable that no decoding be required when all data blocks arrive normally. The invention therefore uses a so-called systematic code, in which the encoded output contains the original data. Under this coding, the data frame is split into k data blocks and m check blocks are generated by encoding. The k data blocks have the same content as the original data, so when they reach the receiving end, the data can be delivered directly to the upper-layer application without any decoding; when packet loss or network delay occurs, the check blocks are used in decoding to recover the data. The network card erasure-code hardware offload model chosen by the invention is an asynchronous computation model, which is more efficient because while the network card computes the erasure code the process does not wait for the computation to finish but can compute or execute other tasks. The workflow is as follows:
(1) call the encode_async(data, code, ...) interface, where data is the data blocks and code is the check blocks;
(2) send the data blocks to the receiving end without waiting for encoding to complete;
(3) wait asynchronously for encoding to complete; during this time the CPU can process other tasks;
(4) after encoding completes, send the check blocks to the receiving end node.
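The four-step asynchronous workflow above can be mimicked in host code as in the sketch below. The real encode_async is the NIC's vendor-specific hardware-offload interface; a thread pool merely stands in for the hardware engine here, and both `send()` and the block contents are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

def encode_async(executor, data_blocks, m):
    """Stand-in for the NIC's asynchronous erasure-code offload: submit the
    encode and return a future instead of blocking the caller."""
    return executor.submit(lambda: [b"parity%d" % j for j in range(m)])

def send(blocks, destination):
    """Placeholder for the RDMA Write of a list of blocks."""
    return ("sent", destination, len(blocks))

with ThreadPoolExecutor(max_workers=1) as pool:
    data_blocks = [b"d0", b"d1", b"d2", b"d3"]
    future = encode_async(pool, data_blocks, m=2)   # (1) submit the encode
    sent_data = send(data_blocks, "receiver")       # (2) send data blocks now
    check_blocks = future.result()                  # (3) wait asynchronously
    sent_check = send(check_blocks, "receiver")     # (4) send check blocks

assert check_blocks == [b"parity0", b"parity1"]
```

Between steps (2) and (3) the CPU is free to process other frames, which is the point of the asynchronous model.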
When the RDMA network card computes erasure codes, the encoding and decoding matrices must be supplied externally by a third-party erasure-code computation library. Jerasure is a cross-platform software erasure-code library that supports Vandermonde-matrix RS coding and Cauchy RS coding. The invention uses Jerasure to compute the codec matrices.
Claims (1)
1. An erasure-code-based RDMA network data transmission method, characterized in that the method comprises the following steps:
s1, the sending end acquires the applied data, divides the data into data frames with fixed size, divides each frame data into k data blocks with the same size, and sends the data blocks to the receiving end, specifically, the sending end copies the data in the applied memory into a buffer area one by one according to the frame size, and allocates a frame number for the frame data, and the frame number of each frame data is added with one; then, dividing a frame of data into k data blocks, and sending the data blocks to a receiving end in parallel by using k QPs;
s2, the sending end sets a timer for each frame of data, the timer module checks whether each sent data frame is overtime, if the data frame does not successfully reach the receiving end in k data after the overtime, the data frame is encoded through the erasure code module to generate m check blocks, and then the data transmission module is called to send the check blocks to the receiving end by using other m QPs;
s3, when the receiving end receives any k of the k + m data blocks, a frame of data is recovered, when the receiving end receives all the data frames, a data transmission is completed, wherein after the receiving end receives a data block, the receiving end identifies the data frame to which it belongs through the immediate number of the Write with estimate operation of RDMA, knows which data block or check block in the data frame according to the QP of the received data block, when the total number of the collected data block and check block of a data frame reaches k, the original content of the frame of data is obtained through the decoding of the erasure code, and finally the frame of data is added into the queue of the received data to wait for the reading of the application program; the receiving end reads the received data frames one by one according to the frame number, the receiving end records the frame number of the data frame to be read next by using a variable read _ pos, the read data is divided into a blocking type and a non-blocking type, the application program waits until the data frame with the frame number of read _ pos appears in the received data queue under the blocking type, if no data frame with the frame number of read _ pos exists in the received data queue under the non-blocking type, the data frame is directly returned, after the application program reads one frame of data from the receiving buffer area of the system, the read _ pos is added, and then the next frame of data is read until the buffer area of the user is full or the last frame of data transmission is encountered.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810487054.3A CN108631947B (en) | 2018-05-21 | 2018-05-21 | RDMA (remote direct memory Access) network data transmission method based on erasure codes |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108631947A CN108631947A (en) | 2018-10-09 |
CN108631947B (en) | 2021-06-25
Family
ID=63693962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810487054.3A Active CN108631947B (en) | 2018-05-21 | 2018-05-21 | RDMA (remote direct memory Access) network data transmission method based on erasure codes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108631947B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109412755B (en) * | 2018-11-05 | 2021-11-23 | 东方网力科技股份有限公司 | Multimedia data processing method, device and storage medium |
CN109861967A (en) * | 2018-12-10 | 2019-06-07 | 中国人民银行清算总中心 | Remote direct memory based on Spark Shuffle accesses system |
CN110113425A (en) * | 2019-05-16 | 2019-08-09 | 南京大学 | A kind of SiteServer LBS and equalization methods based on the unloading of RDMA network interface card correcting and eleting codes |
CN111782609B (en) * | 2020-05-22 | 2023-10-13 | 北京和瑞精湛医学检验实验室有限公司 | Method for rapidly and uniformly slicing fastq file |
CN113055434B (en) * | 2021-02-02 | 2022-07-15 | 浙江大华技术股份有限公司 | Data transmission method, electronic equipment and computer storage medium |
WO2023018779A1 (en) * | 2021-08-13 | 2023-02-16 | Intel Corporation | Remote direct memory access (rdma) support in cellular networks |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101282300B (en) * | 2008-03-03 | 2010-12-08 | 北京航空航天大学 | Method for processing HTTP packet based on non-blockage mechanism |
US9495324B2 (en) * | 2012-03-30 | 2016-11-15 | Intel Corporation | Efficient distribution of subnet administration data over an RDMA network |
CN106227617A (en) * | 2016-07-15 | 2016-12-14 | 乐视控股(北京)有限公司 | Self-repair method and storage system based on correcting and eleting codes algorithm |
CN107070923B (en) * | 2017-04-18 | 2020-07-28 | 上海云熵网络科技有限公司 | P2P live broadcast system and method for reducing code segment repetition |
CN107276722B (en) * | 2017-06-21 | 2020-01-03 | 北京奇艺世纪科技有限公司 | Data transmission method and system based on UDP |
CN107623646B (en) * | 2017-09-06 | 2020-11-17 | 华为技术有限公司 | Data stream transmission method, sending equipment and receiving equipment |
- 2018-05-21: CN application CN201810487054.3A, patent CN108631947B (en), status: Active
Non-Patent Citations (1)
Title |
---|
RDMAvisor: Toward Deploying Scalable and Simple RDMA as a Service in; Zhuzhong Qian, Baoliu Ye, Sanglu Lu et al.; arXiv:1802.01870v1; 2018-02-06; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN108631947A (en) | 2018-10-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108631947B (en) | RDMA (remote direct memory Access) network data transmission method based on erasure codes | |
CN108874307B (en) | Ethernet solid state drive system and method for providing storage unloading function | |
US9654419B2 (en) | Fabric channel control apparatus and method | |
US6724762B2 (en) | System and method for implementing multi-pathing data transfers in a system area network | |
US5583859A (en) | Data labeling technique for high performance protocol processing | |
US12119950B2 (en) | Early acknowledgment for write operations | |
US11722585B2 (en) | Reliable communications using a point to point protocol | |
KR100464195B1 (en) | Method and apparatus for providing a reliable protocol for transferring data | |
CN102118434A (en) | Data packet transmission method and device | |
CN112751644B (en) | Data transmission method, device and system and electronic equipment | |
US7305605B2 (en) | Storage system | |
CN111522656A (en) | Edge calculation data scheduling and distributing method | |
CN115314388A (en) | PRP protocol implementation method based on Bond mechanism | |
CN114401208B (en) | Data transmission method and device, electronic equipment and storage medium | |
WO2022105753A1 (en) | Network data encoding transmission method and apparatus | |
US7907546B1 (en) | Method and system for port negotiation | |
WO2024022243A1 (en) | Data transmission method, network device, computer device, and storage medium | |
US20230305713A1 (en) | Client and network based erasure code recovery | |
US7002966B2 (en) | Priority mechanism for link frame transmission and reception | |
JP6450283B2 (en) | Packet transmission system, packet transmission method, and transmission control apparatus | |
CN116804952A (en) | Erasing code recovery based on client and network | |
JPH04277948A (en) | Apparatus and method for retransmission data frame | |
Harris et al. | DAQ architecture and read-out protocole | |
Tong | Switching Considerations in Storage Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 2019-07-22. Address after: No. 22 Hankou Road, Gulou District, Nanjing, Jiangsu 210093. Applicants after: Nanjing University; Zhejiang Electric Power Co., Ltd.; NARI Group Co. Ltd. Address before: No. 22 Hankou Road, Gulou District, Nanjing, Jiangsu 210093. Applicant before: Nanjing University |
GR01 | Patent grant | ||