CN102904729A - Intelligent boost network card supporting multiple applications according to protocol and port shunt - Google Patents
- Publication number
- CN102904729A CN102904729A CN201210417951XA CN201210417951A CN102904729A CN 102904729 A CN102904729 A CN 102904729A CN 201210417951X A CN201210417951X A CN 201210417951XA CN 201210417951 A CN201210417951 A CN 201210417951A CN 102904729 A CN102904729 A CN 102904729A
- Authority
- CN
- China
- Prior art keywords
- network interface
- packet
- intelligence
- interface card
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention provides an intelligent acceleration network card that supports multiple applications through protocol- and port-based traffic distribution. The intelligent acceleration network card comprises a distribution unit and a storage unit connected to the distribution unit. The card provided by the invention solves the problem that each application must otherwise process an excessively large bandwidth, which severely harms processing efficiency.
Description
Technical field
The invention belongs to the field of network communication, and specifically relates to an intelligent acceleration network card that supports multiple applications through protocol- and port-based traffic distribution (shunting).
Background technology
The bandwidth of backbone networks has grown rapidly in recent years, the variety of Internet applications keeps increasing, and both the kinds and the quantity of network security events have grown exponentially. To safeguard national security and protect fundamental national interests, a large number of IDS-class (intrusion detection system) products targeting different defense objects have emerged. However, constraints such as machine-room space, cooling, and cost make it impossible to run new business systems and absorb newly added traffic simply by adding servers without limit; running multiple business systems on a single server is the preferable solution.
The traditional idea of multi-application processing is to copy each packet multiple times and let each application process its own copy. The advantage of this approach is that the applications do not interfere with one another and are loosely coupled; the drawback is equally clear: every packet must be copied many times, the copying overhead is huge, and the operation of the upper-layer business systems is severely impaired.
To address this problem, some vendors have proposed zero-copy techniques supporting multiple applications: a shared memory region is opened in user space, and through the mechanism of memory mapping, the data captured by the network card is placed directly into user space, where multiple users work on a shared buffer. This approach avoids repeated packet copies and improves processing efficiency to some extent, but every application still has to process all packets.
In real processing, however, applications often need to apply different processing to different types of traffic. Because the earlier processing model does not first classify traffic by type, every application must process all traffic. For example, an application that analyzes IPv4 TCP traffic has no need whatsoever to analyze IPv4 UDP or IPv6 traffic, yet must still handle it, which greatly reduces the application's processing efficiency.
The existing techniques for running multiple IDS-class business systems on a single server are mainly multi-copy packet replication and globally shared zero-copy buffers.
Multi-copy packet replication duplicates each received packet and uploads one copy to each application. This avoids coupling between the applications and lets them process packets independently without interference, but the drawback is equally clear: every packet must be copied many times, the copying overhead is huge, and the operation of the upper-layer business systems is severely impaired.
The globally shared zero-copy buffer technique opens a shared memory region in user space; through memory mapping, the data captured by the network card is DMAed directly into user space, multiple users share the buffered packets through the shared-buffer mechanism, and the driver controls the reception and release of data. This avoids repeated packet copies and improves processing efficiency to some extent, but every application still has to process all packets. In practice many applications analyze only specific traffic characteristics, so processing unrelated traffic is not merely useless: it inflates the bandwidth to be processed and severely harms processing efficiency.
Summary of the invention
To overcome the above defects, the invention provides an intelligent acceleration network card that supports multiple applications through protocol- and port-based traffic distribution, solving the problem that an excessively large processed bandwidth severely harms processing efficiency.
To achieve the above object, the invention provides an intelligent acceleration network card that supports multiple applications through protocol- and port-based traffic distribution, whose improvement is that the network card comprises a distribution unit and a storage unit connected to it.
In a first preferred technical scheme provided by the invention, the distribution unit comprises a register, a rule-matching module, and a load-balancing computation module connected in sequence.
In a second preferred technical scheme provided by the invention, the storage unit is used to buffer packets and store the distribution rules.
In a third preferred technical scheme provided by the invention, the register is used to load the distribution rules stored in the storage unit.
In a fourth preferred technical scheme provided by the invention, the rule-matching module matches received packets against the loaded distribution rules and delivers the source IP and destination IP of each successfully matched packet to the load-balancing computation module.
In a fifth preferred technical scheme provided by the invention, the load-balancing computation module computes the ID of the buffer a packet belongs to and then DMAs the packet into that buffer.
In a sixth preferred technical scheme provided by the invention, the following parameters are set: source IP address ip, destination IP address dip, intermediate state value r, and final state value r2, and the computation proceeds as follows:
(a) XOR the source IP address ip with the destination IP address dip to obtain the intermediate state value r;
(b) XOR the value of r shifted right by 4 bits with the value of r shifted right by 12 bits to obtain the final state value r2;
(c) AND the final state value r2 with 0x0000ffff to obtain the lowest 16 bits of r2;
wherein the lowest 16 bits of the final state value r2 are the packet's hash value.
In a seventh preferred technical scheme provided by the invention, the distribution unit uses an FPGA chip of model XC5VLX110T.
In an eighth preferred technical scheme provided by the invention, the storage unit uses DDR3 memory with a capacity of 4 GB.
Compared with the prior art, the intelligent acceleration network card supporting multiple applications through protocol- and port-based traffic distribution provided by the invention can, according to the needs of the business systems, distribute traffic matching given protocols and ports into designated buffers, so that a business system only needs to fetch data from its own buffer. The accelerator card supports multiple applications through a shared-buffer mechanism and allows multiple applications to be used with configurable priorities. Because each business only needs to process its own relevant traffic, the bandwidth that business must handle is reduced, business-processing efficiency is improved, and the scale of investment is effectively reduced.
Description of drawings
Fig. 1 is a structural diagram of the intelligent acceleration network card supporting multiple applications through protocol- and port-based traffic distribution.
Fig. 2 is a structural diagram of the distribution unit.
Embodiment
As shown in Fig. 1, the intelligent acceleration network card supporting multiple applications through protocol- and port-based traffic distribution is improved in that the network card comprises a distribution unit and a storage unit connected to it.
As shown in Fig. 2, the distribution unit comprises a register, a rule-matching module, and a load-balancing computation module connected in sequence.
The storage unit is used to buffer packets and store the distribution rules.
The register is used to load the distribution rules stored in the storage unit.
The rule-matching module matches received packets against the loaded distribution rules and delivers the source IP and destination IP of each successfully matched packet to the load-balancing computation module.
The load-balancing computation module computes, according to the following formula, the ID of the buffer the packet belongs to, and then DMAs the packet into that buffer.
The load-balancing formula takes the two values sip (source IP address) and dip (destination IP address) as parameters:

inline int getHashValue(__u32 sip, __u32 dip)
{
    int r, r2;
    /* step 1: XOR the source IP address with the destination IP address to obtain the intermediate state value r */
    r = sip ^ dip;
    /* step 2: XOR r shifted right by 4 bits with r shifted right by 12 bits to obtain the final state value r2 */
    r2 = (r >> 4) ^ (r >> 12);
    /* step 3: AND r2 with 0x0000ffff to obtain the lowest 16 bits of r2 */
    return r2 & 0x0000ffff;
}

The lowest 16 bits of the final state value r2 are the packet's hash value.
Through the above formula, packets with the same source IP and destination IP obtain the same hash value, and packets with the same hash value are DMAed into the same buffer.
The distribution unit uses an FPGA chip of model XC5VLX110T.
The storage unit uses DDR3 memory with a capacity of 4 GB.
The intelligent acceleration network card supporting multiple applications through protocol- and port-based traffic distribution is further described through the following embodiments.
The intelligent acceleration network card supporting multiple applications through protocol- and port-based traffic distribution is a PCIe plug-in card product based on an FPGA chip. The card classifies network messages by attributes such as layer-3 protocol, layer-4 protocol, and port, so that packets of different attribute classes can be assigned to different buffers. Across multiple buffers, the card performs load-balanced distribution keyed on source IP and destination IP, which keeps flows between the same pair of hosts together and guarantees that packets of the same connection are always assigned to the same buffer. The card additionally implements dual-stack operation and can receive and process both IPv4 and IPv6 traffic. An upper-layer business system simply selects the buffers matching the traffic characteristics it needs and fetches data from them; which buffers to use is controlled entirely by the application, the card's driver maintains each application's packet-retrieval information, and applications receive differentiated service according to their configured priorities, ensuring that high-priority applications retrieve packets first.
To raise the throughput of running multiple business systems on a single server, to improve the processing efficiency of multi-service systems, to meet ever-growing and increasingly complex traffic-monitoring requirements, to protect the legitimate rights and interests of Internet users, and to maintain a harmonious Internet environment, Dawning has developed an intelligent acceleration network card supporting multiple applications through protocol- and port-based traffic distribution. The card's core component is a dedicated FPGA chip whose internal logic circuits implement the protocol- and port-based distribution. The card also supports multiple applications, which share buffer queues and thereby avoid repeated packet copies, and it supports dual-stack IPv4 and IPv6 operation, providing great extensibility.
The intelligent accelerator card integrates 4 GB of on-board memory, which buffers packets at high speed and stores the distribution rules. A packet that hits a distribution rule is DMAed into the corresponding buffer queue. If one distribution rule corresponds to several buffer queues, the card's FPGA selects a queue based on the packet's source IP and destination IP; the selection algorithm both balances the load across the queues and guarantees that packets of the same connection are always DMAed into the same queue. The entire distribution and matching process is controlled by the core chip on the card and consumes no host resources, so while sustaining efficient distribution the card leaves all of the host's computing resources to the business systems.
A business system selects the buffers matching the traffic characteristics it needs and fetches data from them; which buffers to use is controlled entirely by the application. The card's driver maintains each business system's packet-retrieval queue, and multiple applications receive differentiated service according to their configured priorities, ensuring that high-priority applications retrieve packets first.
The accelerator card's distribution rules are configurable: when the card loads, the rules in a configuration file are loaded into the card's registers. When a packet arrives at the card it is matched against the distribution rules; on a match, the load-balancing computation is performed to obtain the ID of the buffer the packet belongs to, and the card then DMAs the packet into that buffer. The multi-application driver maintains the shared buffer queues according to each application's priority. A business system selects the buffers corresponding to the traffic characteristics it needs and fetches packets from them for analysis; the number and type of buffers selected are fully controlled by the business system. Through this mechanism each business system can mask out irrelevant data, which greatly increases the throughput of the whole system and reduces the scale of the business systems.
It should be stated that the content and embodiments of the present invention are intended to demonstrate the practical application of the technical scheme provided by the invention and shall not be construed as limiting the scope of protection of the invention. Inspired by the spirit and principles of the invention, those skilled in the art may make various modifications, equivalent replacements, or improvements, and all such changes fall within the pending scope of protection of this application.
Claims (9)
1. An intelligent acceleration network card supporting multiple applications through protocol- and port-based traffic distribution, characterized in that the network card comprises a distribution unit and a storage unit connected to it.
2. The intelligent acceleration network card according to claim 1, characterized in that the distribution unit comprises a register, a rule-matching module, and a load-balancing computation module connected in sequence.
3. The intelligent acceleration network card according to claim 1, characterized in that the storage unit is used to buffer packets and store the distribution rules.
4. The intelligent acceleration network card according to claim 2, characterized in that the register is used to load the distribution rules stored in the storage unit.
5. The intelligent acceleration network card according to claim 2, characterized in that the rule-matching module matches received packets against the loaded distribution rules and delivers the source IP and destination IP of each successfully matched packet to the load-balancing computation module.
6. The intelligent acceleration network card according to claim 1, characterized in that the load-balancing computation module computes the ID of the buffer a packet belongs to and then DMAs the packet into that buffer.
7. The intelligent acceleration network card according to claim 6, characterized in that the following parameters are set: source IP address ip, destination IP address dip, intermediate state value r, and final state value r2, and the computation proceeds as follows:
(a) XOR the source IP address ip with the destination IP address dip to obtain the intermediate state value r;
(b) XOR the value of r shifted right by 4 bits with the value of r shifted right by 12 bits to obtain the final state value r2;
(c) AND the final state value r2 with 0x0000ffff to obtain the lowest 16 bits of r2;
wherein the lowest 16 bits of the final state value r2 are the packet's hash value.
8. The intelligent acceleration network card according to claim 1 or 2, characterized in that the distribution unit uses an FPGA chip of model XC5VLX110T.
9. The intelligent acceleration network card according to claim 1 or 2, characterized in that the storage unit uses DDR3 memory with a capacity of 4 GB.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210417951.XA CN102904729B (en) | 2012-10-26 | 2012-10-26 | The intelligent acceleration network card of more applications is supported according to agreement, port shunt |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210417951.XA CN102904729B (en) | 2012-10-26 | 2012-10-26 | The intelligent acceleration network card of more applications is supported according to agreement, port shunt |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102904729A true CN102904729A (en) | 2013-01-30 |
CN102904729B CN102904729B (en) | 2018-05-01 |
Family
ID=47576783
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210417951.XA Active CN102904729B (en) | 2012-10-26 | 2012-10-26 | The intelligent acceleration network card of more applications is supported according to agreement, port shunt |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102904729B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103501548A (en) * | 2013-09-06 | 2014-01-08 | 大连理工大学 | Multipriority-oriented data hardware buffering wireless communication network card |
CN103763198A (en) * | 2013-11-15 | 2014-04-30 | 武汉绿色网络信息服务有限责任公司 | Data packet classification method |
WO2015113437A1 (en) * | 2014-01-29 | 2015-08-06 | 华为技术有限公司 | Data packet processing method and device based on parallel protocol stack instances |
CN104954283A (en) * | 2014-03-31 | 2015-09-30 | 中国电信股份有限公司 | Dual-stack differentiated scheduling method and device |
CN105704059A (en) * | 2016-03-31 | 2016-06-22 | 北京百卓网络技术有限公司 | Load balancing method and load balancing system |
CN105763617A (en) * | 2016-03-31 | 2016-07-13 | 北京百卓网络技术有限公司 | Load balancing method and system |
CN106371925A (en) * | 2016-08-31 | 2017-02-01 | 北京中测安华科技有限公司 | High-speed big data detection method and device |
CN107193657A (en) * | 2017-05-18 | 2017-09-22 | 安徽磐众信息科技有限公司 | Low latency server based on SOLAFLARE network interface cards |
CN109783409A (en) * | 2019-01-24 | 2019-05-21 | 北京百度网讯科技有限公司 | Method and apparatus for handling data |
WO2019129167A1 (en) * | 2017-12-29 | 2019-07-04 | 华为技术有限公司 | Method for processing data packet and network card |
CN110177083A (en) * | 2019-04-26 | 2019-08-27 | 阿里巴巴集团控股有限公司 | A kind of network interface card, data transmission/method of reseptance and equipment |
CN110300081A (en) * | 2018-03-21 | 2019-10-01 | 大唐移动通信设备有限公司 | A kind of method and apparatus of data transmission |
CN110417675A (en) * | 2019-07-29 | 2019-11-05 | 广州竞远安全技术股份有限公司 | The network shunt method, apparatus and system of high-performance probe under a kind of SOC |
CN110768907A (en) * | 2019-09-12 | 2020-02-07 | 苏州浪潮智能科技有限公司 | Method, device and medium for managing FPGA heterogeneous accelerator card cluster |
US11082410B2 (en) | 2019-04-26 | 2021-08-03 | Advanced New Technologies Co., Ltd. | Data transceiving operations and devices |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080045342A1 (en) * | 2003-03-05 | 2008-02-21 | Bally Gaming, Inc. | Data Integrity and Non-Repudiation |
CN101540727A (en) * | 2009-05-05 | 2009-09-23 | 曙光信息产业(北京)有限公司 | Hardware shunt method of IP report |
CN102387219A (en) * | 2011-12-13 | 2012-03-21 | 曙光信息产业(北京)有限公司 | Multi-network-card load balancing system and method |
CN102497298A (en) * | 2011-12-19 | 2012-06-13 | 曙光信息产业(北京)有限公司 | Network audit equipment and method based on flow statistic network card |
CN102752119A (en) * | 2012-07-09 | 2012-10-24 | 南京中兴特种软件有限责任公司 | Interface realizing method for intelligent network card |
- 2012-10-26: application CN201210417951.XA filed in China; granted as patent CN102904729B (status: Active)
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103501548B (en) * | 2013-09-06 | 2016-05-11 | 大连理工大学 | Towards the radio communication network interface card of multipriority data hardware buffer |
CN103501548A (en) * | 2013-09-06 | 2014-01-08 | 大连理工大学 | Multipriority-oriented data hardware buffering wireless communication network card |
CN103763198A (en) * | 2013-11-15 | 2014-04-30 | 武汉绿色网络信息服务有限责任公司 | Data packet classification method |
CN103763198B (en) * | 2013-11-15 | 2016-08-17 | 武汉绿色网络信息服务有限责任公司 | A kind of data packet classification method |
WO2015113437A1 (en) * | 2014-01-29 | 2015-08-06 | 华为技术有限公司 | Data packet processing method and device based on parallel protocol stack instances |
US10218820B2 (en) | 2014-01-29 | 2019-02-26 | Huawei Technologies Co., Ltd. | Method and apparatus for processing data packet based on parallel protocol stack instances |
CN104954283B (en) * | 2014-03-31 | 2018-10-19 | 中国电信股份有限公司 | A kind of double stack differentiation dispatching methods and device |
CN104954283A (en) * | 2014-03-31 | 2015-09-30 | 中国电信股份有限公司 | Dual-stack differentiated scheduling method and device |
CN105704059A (en) * | 2016-03-31 | 2016-06-22 | 北京百卓网络技术有限公司 | Load balancing method and load balancing system |
CN105763617A (en) * | 2016-03-31 | 2016-07-13 | 北京百卓网络技术有限公司 | Load balancing method and system |
CN105763617B (en) * | 2016-03-31 | 2019-08-02 | 北京百卓网络技术有限公司 | A kind of load-balancing method and system |
CN106371925A (en) * | 2016-08-31 | 2017-02-01 | 北京中测安华科技有限公司 | High-speed big data detection method and device |
CN107193657A (en) * | 2017-05-18 | 2017-09-22 | 安徽磐众信息科技有限公司 | Low latency server based on SOLAFLARE network interface cards |
WO2019129167A1 (en) * | 2017-12-29 | 2019-07-04 | 华为技术有限公司 | Method for processing data packet and network card |
CN110300081A (en) * | 2018-03-21 | 2019-10-01 | 大唐移动通信设备有限公司 | A kind of method and apparatus of data transmission |
CN109783409A (en) * | 2019-01-24 | 2019-05-21 | 北京百度网讯科技有限公司 | Method and apparatus for handling data |
CN110177083A (en) * | 2019-04-26 | 2019-08-27 | 阿里巴巴集团控股有限公司 | A kind of network interface card, data transmission/method of reseptance and equipment |
CN110177083B (en) * | 2019-04-26 | 2021-07-06 | 创新先进技术有限公司 | Network card, data sending/receiving method and equipment |
US11082410B2 (en) | 2019-04-26 | 2021-08-03 | Advanced New Technologies Co., Ltd. | Data transceiving operations and devices |
CN110417675A (en) * | 2019-07-29 | 2019-11-05 | 广州竞远安全技术股份有限公司 | The network shunt method, apparatus and system of high-performance probe under a kind of SOC |
CN110768907A (en) * | 2019-09-12 | 2020-02-07 | 苏州浪潮智能科技有限公司 | Method, device and medium for managing FPGA heterogeneous accelerator card cluster |
Also Published As
Publication number | Publication date |
---|---|
CN102904729B (en) | 2018-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102904729A (en) | Intelligent boost network card supporting multiple applications according to protocol and port shunt | |
WO2023087938A1 (en) | Data processing method, programmable network card device, physical server, and storage medium | |
CN101771627B (en) | Equipment and method for analyzing and controlling node real-time deep packet on internet | |
WO2022225639A1 (en) | Service mesh offload to network devices | |
CN102904730A (en) | Intelligent acceleration network card capable of filtering and picking traffic according to protocol, port and IP address | |
CN103081434A (en) | Smart memory | |
CN104205080A (en) | Offloading packet processing for networking device virtualization | |
CN101873337A (en) | Zero-copy data capture technology based on rt8169 gigabit net card and Linux operating system | |
CN109981403A (en) | Virtual machine network data traffic monitoring method and device | |
CN104572574A (en) | GigE (gigabit Ethernet) vision protocol-based Ethernet controller IP (Internet protocol) core and method | |
CN104144156A (en) | Message processing method and device | |
CN103491535B (en) | The general approximate enquiring method of secret protection of facing sensing device network | |
US20220247696A1 (en) | Reliable transport offloaded to network devices | |
CN104461979A (en) | Multi-core on-chip communication network realization method based on ring bus | |
CN102970190B (en) | Network traffic monitoring system | |
CN116266827A (en) | Programming packet processing pipeline | |
CN102790773A (en) | Method for realizing firewall in household gateway | |
US11126249B1 (en) | Power reduction methods for variable sized tables | |
CN103731364B (en) | X86 platform based method for achieving trillion traffic rapid packaging | |
CN117529904A (en) | Packet format adjustment technique | |
CN105579952B (en) | The EMI on high-speed channel to be paused using puppet is inhibited | |
CN115118668A (en) | Flow control techniques | |
Li et al. | The comparison and verification of some efficient packet capture and processing technologies | |
CN103441952A (en) | Network data package processing method based on multi-core or many-core embedded processor | |
US10877911B1 (en) | Pattern generation using a direct memory access engine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20220727 Address after: 100193 No. 36 Building, No. 8 Hospital, Wangxi Road, Haidian District, Beijing Patentee after: Dawning Information Industry (Beijing) Co.,Ltd. Patentee after: DAWNING INFORMATION INDUSTRY Co.,Ltd. Address before: 100193 No.36 Zhongguancun Software Park, No.8 Dongbeiwang West Road, Haidian District, Beijing Patentee before: Dawning Information Industry (Beijing) Co.,Ltd. |
|
TR01 | Transfer of patent right |