CN102769575A - Flow load balancing method for intelligent network card - Google Patents


Info

Publication number
CN102769575A
CN102769575A
Authority
CN
China
Prior art keywords
packet
flow
algorithm
intelligent network
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102815544A
Other languages
Chinese (zh)
Inventor
周立
鲁松
邹昕
汪立东
张晓明
王维晟
王勇
周青
李建昂
孙传明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NANJING ZHONGXING SPECIAL SOFTWARE CO Ltd
National Computer Network and Information Security Management Center
Original Assignee
NANJING ZHONGXING SPECIAL SOFTWARE CO Ltd
National Computer Network and Information Security Management Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NANJING ZHONGXING SPECIAL SOFTWARE CO Ltd, National Computer Network and Information Security Management Center filed Critical NANJING ZHONGXING SPECIAL SOFTWARE CO Ltd
Priority to CN2012102815544A priority Critical patent/CN102769575A/en
Publication of CN102769575A publication Critical patent/CN102769575A/en
Pending legal-status Critical Current

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a flow load balancing method for an intelligent network card. When the server receives data, the intelligent network card delivers incoming network traffic to a shunt control module according to user-configured shunting modes, which may key on any combination of the five-tuple of a received packet, the protocol, and a dynamic field of the UDP (user datagram protocol) payload. The module computes a shunt number and passes the packet together with that number to a data processing module, which preprocesses the packet. Finally the packet, the shunt number, and the preprocessing result are handed to a DMA (direct memory access) module, and the packet is written by DMA into the corresponding host buffer, from which the host application or thread retrieves it for processing. The flow load balancing method disclosed by the invention avoids the problems caused by the traditional method: wasted server processor resources, the impossibility of lightweight threads, troublesome function extension, and excessive application burden.

Description

A flow load balancing method for an intelligent network card
Technical field
The present invention relates to the field of computers, and in particular to a flow load balancing method for an intelligent network card.
 
Background technology
At present, as information technology and network applications develop at a geometric and astonishing rate, the importance of network analysis becomes more and more obvious. At the same time, multi-core servers are applied ever more widely. A traditional network analysis device generally consists of a multi-core x86 server and the corresponding software running on it.
The traditional software architecture for exploiting the advantage of a multi-core processor divides all packets received by the server's network card evenly into many streams, shunting by IP address alone: only the source and destination addresses are considered, not the service type of the packet. The network analysis software starts many threads; every thread is identical, handles the same services, and receives and processes one packet stream.
Although the traditional approach exploits the parallel processing capability of the server's multi-core processor, every thread must be full-featured and handle every service. As network applications become richer and network environments grow increasingly complex, each extension to support a new service simultaneously adds a further layer of burden to every thread. Threads under this software architecture are therefore overloaded, lightweight threads cannot be realized, at most as many threads as the server has cores can be opened to exploit its performance, and functions cannot be extended easily. Too much server processor performance is wasted, and packet loss occurs readily under heavy network traffic.
Summary of the invention
The present invention addresses the problems of the traditional scheme, which shunts packets evenly by the source and destination addresses of IP packets and allocates full-service packets evenly to every thread of a multi-core server, thereby wasting server processor resources, preventing lightweight threads, making function extension troublesome, and overburdening applications. It proposes a flow load balancing method for an intelligent network card. The method is based on intelligent network card technology: the card adopts a multi-core network processor, offers high processing performance, and its software can be flexibly configured and fully developed according to the user's requirements. In this method the intelligent network card is divided into a network interface module, a shunting module, a data processing module, and a DMA module, giving full play to the card's powerful software capability and rich hardware resources.
Technical scheme of the present invention is:
The intelligent network card sends received network traffic to the shunt control module for shunting according to user-configured shunting modes, which may key on any combination of the five-tuple of a received packet, the protocol, and a dynamic field of the UDP payload. The shunt number is computed, and the packet is delivered to the corresponding processing program on the server into which the card is plugged. The method of the invention specifically comprises the following steps:
1) When the intelligent network card driver is loaded, it allocates an independent receive buffer for each upper-layer application or application thread, including the buffer's start address and a pair of read/write pointers, all configured into the relevant network card registers; at the same time it allocates corresponding DMA resources for each application or thread.
2) According to the demand of each application or thread, a corresponding shunting algorithm is configured at the background server end. Applications or threads with the same shunting algorithm form a shunt group, within which traffic is distributed evenly. Each shunting algorithm is configured with a query match condition, a query termination condition, and a priority.
3) After the intelligent network card receives a packet, the shunt control module filters the packet against the conditions of the shunting algorithms to determine which algorithm applies. The query starts from the highest-priority shunting algorithm; if the packet satisfies that algorithm's match condition, it is delivered to the corresponding shunt group; if not, the query of this priority's algorithm terminates and the flow proceeds to the next step.
4) Step 3) is repeated for the shunting algorithms of successively lower priority until all application-configured algorithms have been queried; a packet matching none of them is handled by the default shunting algorithm.
5) After a packet hits a shunting algorithm, the shunting module sends the shunt number of that algorithm together with the packet into the data processing module.
6) The data processing module preprocesses the packet so that part of the packet-handling work is completed on the network card. The processing is configured by the user, with a corresponding processing mode per shunting algorithm, including stream reassembly, content matching, and protocol recognition actions.
7) After preprocessing completes, the data processing module delivers the packet, the preprocessing result, and the shunt number to the DMA module. The DMA module selects the corresponding host buffer according to the shunt number, performs the DMA operation to deliver the packet and the preprocessing result to the host, and updates the corresponding read/write pointer to notify the host.
8) The different applications or threads running on the host each fetch the packets for their own shunt numbers, together with the corresponding preprocessing results, which accelerates processing.
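To make the shunting configuration of step 2) concrete: a shunting algorithm that keys on the five-tuple can be modeled as a hash over a user-selected subset of fields. The sketch below is our own illustration, not taken from the patent — the function name, the CRC32 hash, and the `fields` mask parameter are all assumptions:

```python
import zlib

def shunt_number(src_ip, dst_ip, src_port, dst_port, proto, num_shunts,
                 fields=("src_ip", "dst_ip", "src_port", "dst_port", "proto")):
    # `fields` models the user-configured shunting mode: any combination
    # of the five-tuple may be selected as the hash key.
    values = {"src_ip": src_ip, "dst_ip": dst_ip,
              "src_port": src_port, "dst_port": dst_port, "proto": proto}
    key = "|".join(str(values[f]) for f in fields)
    # CRC32 stands in for whatever hash the card's hardware would use.
    return zlib.crc32(key.encode()) % num_shunts
```

Note that the same inputs always map to the same shunt, so all packets of one flow land in one receive buffer; narrowing `fields` (e.g. to the source IP only) coarsens the grouping.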
In the present invention, the shunting algorithm can be configured independently by each application or thread.
In the present invention, applications or threads configured with the same shunting algorithm form a shunt group, and packets are distributed evenly over the shunt numbers within the group. The shunting algorithms differ in priority, and each is additionally configured with a query termination condition. The network card shunting module queries the algorithms for each packet step by step, from high priority to low, down to the default algorithm; once a packet hits an algorithm, the shunt number and the packet are delivered directly to the downstream data processing module for packet preprocessing.
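The priority-ordered query with in-group balancing can be sketched as a rule table with a round-robin cursor per shunt group. All names and the data structure here are hypothetical; the patent does not prescribe an implementation:

```python
class ShuntingRule:
    """One configured shunting algorithm: a match predicate, a priority,
    and the shunt group (list of shunt numbers) it feeds."""
    def __init__(self, priority, matches, shunt_group):
        self.priority = priority      # higher value = queried first
        self.matches = matches        # callable: packet dict -> bool
        self.shunt_group = shunt_group
        self._next = 0                # round-robin cursor within the group

    def pick_shunt(self):
        # Even distribution over the shunt numbers inside the group.
        shunt = self.shunt_group[self._next % len(self.shunt_group)]
        self._next += 1
        return shunt

def classify(packet, rules, default_shunt=0):
    """Query rules from highest to lowest priority; the first hit wins.
    A packet matching no rule falls through to the default algorithm."""
    for rule in sorted(rules, key=lambda r: -r.priority):
        if rule.matches(packet):
            return rule.pick_shunt()
    return default_shunt
```

A round-robin cursor is only one way to balance within a group; hashing (as in the previous sketch) keeps flow affinity, while round-robin maximizes evenness.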
In the present invention, the data processing module preprocesses packets; the preprocessing mode is determined by each application or thread according to its own shunting algorithm.
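As a toy model of the configurable preprocessing (content matching plus a naive protocol recognition; the patent leaves the concrete logic to user configuration, so everything below is an assumption), one might write:

```python
def preprocess(payload, patterns):
    """Produce a preprocessing result the host thread can reuse:
    which configured patterns matched, and a guessed protocol."""
    result = {"matches": [p for p in patterns if p in payload]}
    # Crude protocol recognition by payload prefix, for illustration only.
    if payload.startswith(b"GET ") or payload.startswith(b"POST"):
        result["protocol"] = "HTTP"
    elif payload.startswith(b"\x16\x03"):   # TLS handshake record header
        result["protocol"] = "TLS"
    else:
        result["protocol"] = "unknown"
    return result
```

Because the result travels to the host alongside the packet, the host thread can skip repeating this matching and recognition work, which is the burden reduction the patent claims.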
In the present invention, the DMA module chooses the receive buffer corresponding to the shunt number determined by the shunting module, initiates the DMA operation, writes the data into host memory, and modifies the corresponding read/write pointer to notify the host application or thread. The host application fetches the packets it is to process directly from its own receive buffer.
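A receive buffer with a pair of read/write pointers behaves like a single-producer, single-consumer ring: the card's DMA module advances the write pointer, and the host thread advances the read pointer. A minimal model under those assumptions (the patent does not specify the buffer layout):

```python
class ReceiveRing:
    """SPSC ring: the NIC's DMA module is the producer,
    the host application or thread is the consumer."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write = 0   # advanced by the card after each DMA operation
        self.read = 0    # advanced by the host after each fetch

    def dma_put(self, packet, preproc_result):
        if self.write - self.read == self.capacity:
            return False  # ring full: the host has fallen behind
        self.buf[self.write % self.capacity] = (packet, preproc_result)
        self.write += 1   # updating the pointer is the "notify host" step
        return True

    def host_get(self):
        if self.read == self.write:
            return None   # empty: nothing to process
        item = self.buf[self.read % self.capacity]
        self.read += 1
        return item
```

Because each application owns its own ring, host threads never contend for a shared queue, which is what allows them to stay lightweight.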
Beneficial effects of the present invention:
Based on the server configuration, the present invention provides a dedicated stream buffer area for each supported upper-layer software thread, and each application or thread can configure its own shunting algorithm according to its own needs. The intelligent network card delivers each packet that hits a shunting algorithm into the receive buffer of the corresponding shunt number, for the application or thread to fetch and use. The card can also perform preprocessing operations on packets, such as stream reassembly, content matching, and protocol recognition, greatly lightening the burden on the application or thread.
Using the flow load balancing method of the present invention avoids the problems caused by the conventional method: wasted server processor resources, the impossibility of lightweight threads, troublesome function extension, and overburdened applications.
 
Description of drawings
Fig. 1 is a schematic diagram of the system structure of the present invention.
Fig. 2 is a schematic diagram of the shunting module of the present invention.
 
Embodiment
The present invention is further described below with reference to the accompanying drawings and an embodiment. The method comprises the following steps:
1) When the intelligent network card driver is loaded, it allocates an independent receive buffer and corresponding DMA resources for each upper-layer application or application thread, including the buffer's start address and a pair of read/write pointers, all configured into the relevant network card registers.
2) Each application or thread can configure the shunting algorithm it requires according to its own demand. Applications or threads with the same shunting algorithm can form a shunt group, within which traffic is distributed evenly. Each shunting algorithm is configured with a priority and a query termination condition.
3) After receiving a packet, the intelligent network card first performs the highest-priority shunting check; if the condition is satisfied, the packet is delivered to the corresponding shunt group. If not (that is, the query termination condition is met), the flow proceeds to the next step.
4) Step 3) is repeated for the lower-priority shunting algorithms until all application-configured algorithms have been queried; a packet matching none of them is handled by the default shunting algorithm.
5) After a packet hits a shunting algorithm, the shunting module sends the shunt number together with the packet into the data processing module.
6) The data processing module preprocesses the packet; the purpose is to complete part of the packet-handling work on the network card and lighten the burden on the host application. The processing is configured by the user, and a different processing mode can be configured per shunting algorithm, including actions such as stream reassembly, content matching, and protocol recognition.
7) After preprocessing completes, the data processing module delivers the packet, the preprocessing result, and the shunt number to the DMA module. The DMA module selects the corresponding host buffer according to the shunt number, performs the DMA operation to deliver the packet and the preprocessing result to the host, and updates the corresponding read/write pointer to notify the host.
8) The different applications or threads running on the host each fetch the packets for their own shunt numbers, together with the corresponding preprocessing results, which accelerates processing.
Based on the server configuration, the present invention provides a dedicated stream buffer area for each supported upper-layer software thread, and each application or thread can configure its own shunting algorithm according to its own needs. The intelligent network card delivers each packet that hits a shunting algorithm into the receive buffer of the corresponding shunt number, for the application or thread to fetch and use. The card can also perform preprocessing operations on packets, such as stream reassembly, content matching, and protocol recognition, greatly lightening the burden on the application or thread.
Using the flow load balancing method of the present invention avoids the problems caused by the conventional method: wasted server processor resources, the impossibility of lightweight threads, troublesome function extension, and overburdened applications.
Parts of the present invention that are not described in detail are the same as, or can be implemented with, the prior art.

Claims (7)

1. A flow load balancing method for an intelligent network card, the intelligent network card comprising the following:
a network interface module, a shunt control module, a data processing module, and a DMA module, wherein the shunt control module and the data processing module are both based in hardware on a multi-core network processor.
2. The flow load balancing method for an intelligent network card according to claim 1, characterized in that:
the intelligent network card sends received network traffic to the shunt control module for shunting according to user-configured shunting modes, which may key on any combination of the five-tuple of a received packet, the protocol, and a dynamic field of the UDP payload; the shunting algorithm and shunt number are identified, and the packet is sent to the processing program corresponding to its shunt number on the server into which the intelligent network card is plugged.
3. The flow load balancing method for an intelligent network card according to claim 1 or 2, comprising the following steps:
1) when the intelligent network card driver is loaded, an independent receive buffer is allocated for each upper-layer application or application thread, including the buffer's start address and a pair of read/write pointers, all configured into the relevant network card registers; at the same time corresponding DMA resources are allocated for each application or thread;
2) according to the demand of each application or thread, a corresponding shunting algorithm is configured at the background server end; applications or threads with the same shunting algorithm form a shunt group, within which traffic is distributed evenly; each shunting algorithm is configured with a query match condition, a query termination condition, and a priority;
3) after the intelligent network card receives a packet, the shunt control module filters the packet against the conditions of the shunting algorithms to determine which algorithm applies; the query starts from the highest-priority shunting algorithm; if the packet satisfies the match condition of the highest-priority algorithm, it is delivered to the corresponding shunt group; if not, the query of this priority's algorithm terminates and the flow proceeds to the next step;
4) step 3) is repeated for the shunting algorithms of successively lower priority until all application-configured algorithms have been queried; a packet matching none of them is handled by the default shunting algorithm;
5) after a packet hits a shunting algorithm, the shunting module sends the shunt number of that algorithm together with the packet into the data processing module;
6) the data processing module preprocesses the packet so that part of the packet-handling work is completed on the network card; the processing is configured by the user, with a corresponding processing mode per shunting algorithm, including stream reassembly, content matching, and protocol recognition actions;
7) after preprocessing completes, the data processing module delivers the packet, the preprocessing result, and the shunt number to the DMA module; the DMA module selects the corresponding host buffer according to the shunt number, performs the DMA operation to deliver the packet and the preprocessing result to the host, and updates the corresponding read/write pointer to notify the host;
8) the different applications or threads running on the host each fetch the packets for their own shunt numbers, together with the corresponding preprocessing results, accelerating processing.
4. The flow load balancing method for an intelligent network card according to claim 3, characterized in that:
the shunting algorithm is configured independently by each application or thread.
5. The flow load balancing method for an intelligent network card according to claim 3, characterized in that:
applications or threads configured with the same shunting algorithm form a shunt group; packets are distributed evenly over the shunt numbers within the group; the shunting algorithms differ in priority, and each is additionally configured with a query match condition and a query termination condition; the network card shunting module queries the algorithms for each packet step by step from high priority to low, down to the default algorithm, and once a packet hits an algorithm, the shunt number and the packet are delivered directly to the downstream data processing module for packet preprocessing.
6. The flow load balancing method for an intelligent network card according to claim 3, characterized in that:
the data processing module preprocesses packets, and the preprocessing mode is determined by each application or thread according to the shunting algorithm it has configured.
7. The flow load balancing method for an intelligent network card according to claim 3, characterized in that:
the DMA module chooses the receive buffer corresponding to the shunt number determined by the shunting module, initiates the DMA operation, writes the data into host memory, and modifies the corresponding read/write pointer to notify the host application or thread; the host application fetches the packets it is to process directly from its own receive buffer.
CN2012102815544A 2012-08-08 2012-08-08 Flow load balancing method for intelligent network card Pending CN102769575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102815544A CN102769575A (en) 2012-08-08 2012-08-08 Flow load balancing method for intelligent network card

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012102815544A CN102769575A (en) 2012-08-08 2012-08-08 Flow load balancing method for intelligent network card

Publications (1)

Publication Number Publication Date
CN102769575A true CN102769575A (en) 2012-11-07

Family

ID=47096829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102815544A Pending CN102769575A (en) 2012-08-08 2012-08-08 Flow load balancing method for intelligent network card

Country Status (1)

Country Link
CN (1) CN102769575A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540727A (en) * 2009-05-05 2009-09-23 曙光信息产业(北京)有限公司 Hardware shunt method of IP report
CN101699788A (en) * 2009-10-30 2010-04-28 清华大学 Modularized network intrusion detection system
CN102073547A (en) * 2010-12-17 2011-05-25 国家计算机网络与信息安全管理中心 Performance optimizing method for multipath server multi-buffer-zone parallel packet receiving

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103677750A (en) * 2013-12-09 2014-03-26 龙芯中科技术有限公司 Method and device for processing thread
WO2015096656A1 (en) * 2013-12-26 2015-07-02 华为技术有限公司 Thread creation method, service request processing method and related device
US10218820B2 (en) 2014-01-29 2019-02-26 Huawei Technologies Co., Ltd. Method and apparatus for processing data packet based on parallel protocol stack instances
WO2015113437A1 (en) * 2014-01-29 2015-08-06 华为技术有限公司 Data packet processing method and device based on parallel protocol stack instances
US10715589B2 (en) 2014-10-17 2020-07-14 Huawei Technologies Co., Ltd. Data stream distribution method and apparatus
CN106165359B (en) * 2014-10-17 2020-05-08 华为技术有限公司 Data stream distribution method and device
CN106165359A (en) * 2014-10-17 2016-11-23 华为技术有限公司 Data stream distributing method and equipment
CN104539642A (en) * 2014-10-29 2015-04-22 杭州银江智慧医疗集团有限公司 Device and method for hardware acceleration of Internet of things module equipment based on infection control protocol package
CN107317759A (en) * 2017-06-13 2017-11-03 国家计算机网络与信息安全管理中心 A kind of thread-level dynamic equalization dispatching method of network interface card
CN107135278A (en) * 2017-07-06 2017-09-05 深圳市视维科技股份有限公司 A kind of efficient load equalizer and SiteServer LBS
CN107592370A (en) * 2017-10-31 2018-01-16 郑州云海信息技术有限公司 A kind of network load balancing method and device
CN108092913A (en) * 2017-12-27 2018-05-29 杭州迪普科技股份有限公司 A kind of method and the multi-core CPU network equipment of message shunting
CN108092913B (en) * 2017-12-27 2022-01-25 杭州迪普科技股份有限公司 Message distribution method and multi-core CPU network equipment
CN110300131A (en) * 2018-03-21 2019-10-01 北京金风科创风电设备有限公司 Routing method, device, equipment and system for multiple services of wind power plant
CN110300131B (en) * 2018-03-21 2022-02-15 北京金风科创风电设备有限公司 Routing method, device, equipment and system for multiple services of wind power plant
WO2022089175A1 (en) * 2020-10-29 2022-05-05 华为技术有限公司 Network congestion control method and apparatus
CN113157447A (en) * 2021-04-13 2021-07-23 中南大学 RPC load balancing method based on intelligent network card
CN113157447B (en) * 2021-04-13 2023-08-29 中南大学 RPC load balancing method based on intelligent network card

Similar Documents

Publication Publication Date Title
CN102769575A (en) Flow load balancing method for intelligent network card
US10210125B2 (en) Receive queue with stride-based data scattering
CN101540727B (en) Hardware shunt method of IP report
CN103795622B (en) Message forwarding method and device using same
KR100782945B1 (en) Method for managing data stream transport in a network
CN103176780B (en) A kind of multi-network interface binding system and method
CN111176723B (en) Service grid and link version based service multi-version release system and method
US20080273532A1 (en) Direct Assembly Of A Data Payload In An Application Memory
US20180052789A1 (en) Direct Memory Access Transmission Control Method and Apparatus
CN1488105A (en) Method and apparatus forcontrolling flow of data between data processing systems via a memory
US7751401B2 (en) Method and apparatus to provide virtual toe interface with fail-over
CN103733574A (en) Virtualization gateway between virtualized and non-virtualized networks
CN106936916A (en) Data sharing method and device
CN101840328A (en) Data processing method, system and related equipment
US20090006690A1 (en) Providing universal serial bus device virtualization with a schedule merge from multiple virtual machines
CN103916374A (en) Service gated launch method and device
CN102209019B (en) A kind of load-balancing method based on message payload and load-balancing device
CN103200072A (en) Network-based data transmission method, device and system
CN111470228A (en) Goods sorting system, method, device and storage medium
CN103312781A (en) Implementation method of virtual USB (Universal Serial Bus)
CN105119964A (en) File storage via physical block addresses
CN107291638A (en) Parallel processing apparatus and the method for control communication
CN106790767A (en) A kind of method based on the automatic distribution virtual machine IP of VMware virtualizations
CN102075432A (en) Method, device, equipment and system for transmitting and receiving message
CN109189581A (en) A kind of job scheduling method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C53 Correction of patent of invention or patent application
CB03 Change of inventor or designer information

Inventor after: Lu Song

Inventor after: Wang Yanhai

Inventor after: Wang Lidong

Inventor after: Zou Cuan

Inventor after: Zhou Li

Inventor after: Yan Pan

Inventor after: Wang Yong

Inventor after: Zhang Liang

Inventor after: Han Zhiqian

Inventor after: Liu Xin

Inventor before: Zhou Li

Inventor before: Sun Chuanming

Inventor before: Lu Song

Inventor before: Zou Cuan

Inventor before: Wang Lidong

Inventor before: Zhang Xiaoming

Inventor before: Wang Weicheng

Inventor before: Wang Yong

Inventor before: Zhou Qing

Inventor before: Li Jianang

COR Change of bibliographic data

Free format text: CORRECT: INVENTOR; FROM: ZHOU LI LU SONG ZOU XIN WANG LIDONG ZHANG XIAOMING WANG WEISHENG WANG YONGZHOU QING LI JIANANG SUN CHUANMING TO: LU SONG WANG LIDONG ZOU XIN ZHOU LI YAN PAN WANG YONG ZHANG LIANG HAN ZHIQIAN LIU XIN WANG YANHAI

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: No. 68 Bauhinia Road, Yuhuatai District, Nanjing City, Jiangsu Province, 210012

Applicant after: Nanjing Sinovatio Technology LLC

Applicant after: State Computer Network and Information Safety Management Center

Address before: No. 68 Bauhinia Road, Yuhuatai District, Nanjing City, Jiangsu Province, 210012

Applicant before: Nanjing Zhongxing Special Software Co., Ltd.

Applicant before: State Computer Network and Information Safety Management Center

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: NANJING ZHONGXING SPECIAL SOFTWARE CO., LTD. TO: NANJING SINOVATIO TECHNOLOGY LLC

CB02 Change of applicant information

Address after: No. 17 Tulip Road, Yuhuatai District, Jiangsu Province, 210012

Applicant after: Nanjing Sinovatio Technology LLC

Applicant after: State Computer Network and Information Safety Management Center

Address before: No. 68 Bauhinia Road, Yuhuatai District, Nanjing City, Jiangsu Province, 210012

Applicant before: Nanjing Sinovatio Technology LLC

Applicant before: State Computer Network and Information Safety Management Center

COR Change of bibliographic data
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20121107