CN102650950B - Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture - Google Patents


Info

Publication number
CN102650950B
CN102650950B (application CN201210102989.8A)
Authority
CN
China
Prior art keywords
gpu
module
virtual machine
server
client end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210102989.8A
Other languages
Chinese (zh)
Other versions
CN102650950A (en)
Inventor
袁家斌
吕相文
马业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201210102989.8A
Publication of CN102650950A
Application granted
Publication of CN102650950B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The invention provides a platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and a working method of the platform architecture. By deploying middleware at the GPU server end and at the virtual machine end and using transports such as sockets or InfiniBand as the transmission medium, the architecture overcomes the inability of an ordinary virtual machine platform to use GPUs for acceleration. GPU resources are managed through one or more centrally controlled management nodes, which partition the GPU resources at fine granularity and support multi-task execution. Through the middleware, a virtual machine requests GPU resources from the management nodes and uses them for acceleration, while a GPU server registers its GPU resources with the management nodes and uses them to provide service. The architecture introduces the parallel processing capability of GPUs into virtual machines and, combined with the management mechanism, maximizes GPU utilization, effectively reducing energy consumption and improving computing efficiency.

Description

A platform architecture supporting multi-GPU virtualization and a working method thereof
Technical field
The present invention relates to a platform architecture, and a working method thereof, by which a virtual machine in a virtualized environment uses multiple GPUs for accelerated computation through call redirection, and belongs to the technical field of virtualization.
Background technology
Virtualization is the core technology underlying cloud computing. The advantages it brings, such as cost savings and enhanced security, have gradually gained acceptance, and it is a research hotspot of computer science. By virtualizing hardware resources, multiple identical computer hardware platforms can be simulated on a single computer, so that several operating systems run simultaneously in mutual isolation. This improves server utilization and has found wide application in fields such as server consolidation, network security, data protection, high-performance computing and trusted computing.
In recent years, the performance and functionality of the Graphics Processing Unit (GPU) have increased significantly. The GPU is no longer confined to image processing; it has evolved into a highly parallel processor that offers both a high peak computing rate and high memory bandwidth. With the release of technologies that support GPGPU (general-purpose graphics processing unit) computing, such as CUDA, GPGPU applications have become increasingly widespread, and the powerful parallel computing capability of the GPU has led more and more high-performance computing systems to adopt the CPU+GPU heterogeneous model. Several problems remain, however. First, GPU power consumption is high: if every node is equipped with a GPU, the power consumption of the cluster may increase greatly. Second, although the parallel computing capability of the GPU is powerful, in most computations the GPU acts as a coprocessor that accelerates only the parallel portions of the code, so GPU utilization stays low. Third, because of the closed nature of the GPU, a virtual machine cannot use the GPU directly for accelerated computation. These factors greatly limit the application of GPUs in virtual machines.
Summary of the invention
Technical problem
To solve the problem that a virtualized environment cannot use GPUs for acceleration, the present invention proposes a multi-GPU virtualization platform architecture suitable for clusters, together with its working method. Through the cooperation of a management component, a client component and a server component, a virtual machine can obtain the powerful parallel computing capability of GPUs, acquire GPU resources with fine-grained resource allocation, and, by load balancing across GPUs, improve GPU utilization and reduce energy consumption. The invention raises the processing capability of virtual machines by means of the GPU: the component inside the virtual machine intercepts the GPU calls of applications and redirects them to the local privileged domain or to a remote GPU server, where the calls are executed; the results are then returned to the virtual machine.
Technical solution
The present invention adopts the following technical solution to solve the above technical problem:
A platform architecture supporting multi-GPU virtualization comprises a GPU resource management module, a virtual machine client module and a GPU server module. The GPU resource management module is deployed on a GPU resource management node, the virtual machine client module is deployed on the virtual machine client, and the GPU server module is deployed on the GPU server. The GPU resource management module is responsible for registering GPU servers and handling GPU resource requests. The virtual machine client module and the GPU server module exchange data and interact with each other: the virtual machine client module intercepts the virtual machine's GPU calls and redirects them to the GPU server module, while the GPU server module accepts the GPU call information intercepted by the virtual machine client module, invokes a GPU to execute it, and returns the execution result.
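The patent names sockets or InfiniBand as possible transports but does not prescribe a wire format for the messages exchanged by the three modules. The following minimal Python sketch is an illustrative assumption only: it invents a length-prefixed JSON-over-TCP encoding and hypothetical message-type names (REGISTER, REQUEST, ASSIGN, CALL, RESULT, STOP) that do not appear in the patent.

    # Minimal sketch of a shared wire protocol for the three modules.
    # The encoding (length-prefixed JSON over TCP) and the message-type
    # names below are assumptions for illustration, not the patented format.
    import json
    import socket
    import struct

    MSG_REGISTER = "REGISTER"  # GPU server module -> GPU resource management module
    MSG_REQUEST = "REQUEST"    # virtual machine client module -> management module
    MSG_ASSIGN = "ASSIGN"      # management module -> client (the chosen GPU server)
    MSG_CALL = "CALL"          # client module -> GPU server module (encapsulated GPU call)
    MSG_RESULT = "RESULT"      # GPU server module -> client module
    MSG_STOP = "STOP"          # client module -> GPU server module (stop transmission)

    def send_msg(sock: socket.socket, msg: dict) -> None:
        """Length-prefix and send one JSON message."""
        payload = json.dumps(msg).encode("utf-8")
        sock.sendall(struct.pack("!I", len(payload)) + payload)

    def recv_msg(sock: socket.socket) -> dict:
        """Receive one length-prefixed JSON message."""
        (length,) = struct.unpack("!I", _recv_exact(sock, 4))
        return json.loads(_recv_exact(sock, length).decode("utf-8"))

    def _recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf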
The working method of the above platform architecture supporting multi-GPU virtualization comprises the following steps:
Step 1: start the GPU resource management module, listen for registration requests from GPU server modules and resource requests from virtual machine client modules, and maintain a state table of the registered GPU servers.
Step 2: start the GPU server module and send a registration request to the GPU resource management module.
Step 3: the GPU resource management module receives the registration request, builds a table that maintains the state of the current GPU server, and returns a success message when finished.
Step 4: the GPU server module receives the registration-success message and immediately listens on a designated port.
Step 5: start the virtual machine client module and monitor GPU calls; when a GPU call occurs in the virtual machine, send a resource request to the GPU resource management module.
Step 6: the GPU resource management module receives the resource request, obtains the working state of the registered GPU servers and, according to a matching algorithm, assigns the best-matched GPU server to the virtual machine client module (a minimal selection sketch follows step 9 below).
Step 7: the virtual machine client module receives the assigned GPU server and establishes a data transmission connection with the GPU server module of that GPU server; it encapsulates the intercepted calls and sends them to the GPU server module over the data transmission connection.
Step 8: the GPU server module receives the encapsulated data, takes the load information of each GPU into account, selects the best-matched GPU for the call, executes the call, and returns the results until execution ends.
Step 9: the virtual machine client module receives the results and returns them to the application until execution ends.
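Steps 1, 3 and 6 describe a state table of registered GPU servers and leave the matching algorithm unspecified. The sketch below, referenced in step 6, is an illustrative assumption rather than the patented method: it keeps one entry per registered server with its last reported load and picks the least-loaded server.

    # Sketch of the GPU resource management module's state table (steps 1, 3, 6).
    # The least-reported-load selection is an assumed heuristic; the patent only
    # says the best-matched GPU server is chosen according to a matching algorithm.
    from dataclasses import dataclass, field
    from typing import Dict, Optional, Tuple

    @dataclass
    class GpuServerEntry:
        host: str
        port: int
        num_gpus: int = 1
        load: float = 0.0  # last reported load; lower means less busy

    @dataclass
    class ResourceManager:
        servers: Dict[Tuple[str, int], GpuServerEntry] = field(default_factory=dict)

        def register(self, host: str, port: int, num_gpus: int = 1) -> bool:
            """Step 3: record a newly registered GPU server in the state table."""
            self.servers[(host, port)] = GpuServerEntry(host, port, num_gpus)
            return True

        def update_load(self, host: str, port: int, load: float) -> None:
            """Real-time load monitoring of a registered GPU server."""
            if (host, port) in self.servers:
                self.servers[(host, port)].load = load

        def allocate(self) -> Optional[GpuServerEntry]:
            """Step 6: return the best-matched (here: least-loaded) GPU server, if any."""
            if not self.servers:
                return None
            return min(self.servers.values(), key=lambda s: s.load)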
Beneficial effects
The present invention uses existing virtualization technology to bring the parallel processing capability of GPUs into virtual machines, combined with a management mechanism. Conventional sockets, InfiniBand, or the dedicated communication channel of each virtualization platform serves as the carrier for data transmission, so the existing virtualization platform is not modified; components are merely added on top of the original platform, enabling virtual machines to compute with GPUs. The approach is applicable to all virtualization platforms and is easy to use: users need only simple setup and configuration, and it is easy to port and runs on each virtualization platform without modification. The invention is designed for virtual clusters, where GPUs assist in improving virtualized processing capability. It suits teachers, students and others who need to use GPU programming through demonstrations or other guidance in teaching, and it also suits virtual clusters where the processing capability of virtual machines and the utilization of GPUs are to be improved. The usage scenarios are broad, and the invention has good practicality and feasibility.
Brief description of the drawings
Fig. 1 is a functional block diagram of the present invention;
Fig. 2 is a flowchart of the transmission procedure of the present invention;
Fig. 3 is a flowchart of the real-time control routine of the present invention.
Detailed description of embodiments
Specific embodiments of the present invention are described below with reference to the accompanying drawings:
As shown in Fig. 1, the platform architecture supporting multi-GPU virtualization provided by the present invention comprises a GPU resource management module, a virtual machine client module and a GPU server module. The GPU resource management module is deployed on the GPU resource management node, the virtual machine client module is deployed on the virtual machine client, and the GPU server module is deployed on the GPU server. The virtual machine client communicates with the GPU server and the GPU resource management node through sockets, InfiniBand, or the dedicated communication channel of the virtualization platform. The GPU resource management module is responsible for registering GPU servers and handling GPU resource requests. The virtual machine client module and the GPU server module exchange data and interact with each other: the virtual machine client module intercepts the virtual machine's GPU calls and redirects them to the GPU server module, while the GPU server module accepts the intercepted GPU call information, invokes a GPU to execute it, and returns the execution result. The GPU resource management module processes the requests of the GPU server modules and the virtual machine client modules: it responds to resource registration from GPU server modules, monitors the load of each GPU server in real time, adjusts the tasks of each GPU server and manages the computing resources of each GPU server; at the same time, it responds to requests from virtual machine client modules according to the current load and assigns to each of them the best-matched GPU server. The virtual machine client module and the GPU server module interact through sockets, InfiniBand, or the dedicated communication channel of the virtualization platform, and complete GPU acceleration for the virtual machine by interception and redirection. In addition, the GPU server module manages the multiple GPUs inside a GPU server: each GPU supports multiple tasks running in parallel, and the GPU server module keeps statistics of the current task load of each GPU and balances the load across GPUs accordingly.
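The embodiment states that each GPU runs multiple tasks in parallel and that the GPU server module keeps per-GPU task statistics and balances load across GPUs, without fixing a policy. The counter-based scheme below is one possible sketch and is an assumption, not the patent's specified mechanism.

    # Sketch of per-GPU task accounting inside the GPU server module. Choosing the
    # GPU with the fewest running tasks is an assumed balancing heuristic.
    import threading

    class GpuLoadBalancer:
        def __init__(self, num_gpus: int) -> None:
            self.tasks = [0] * num_gpus  # current task count per GPU
            self.lock = threading.Lock()

        def acquire(self) -> int:
            """Select the least-loaded GPU and count the new task against it."""
            with self.lock:
                gpu_id = min(range(len(self.tasks)), key=lambda i: self.tasks[i])
                self.tasks[gpu_id] += 1
                return gpu_id

        def release(self, gpu_id: int) -> None:
            """Mark a task on the given GPU as finished."""
            with self.lock:
                self.tasks[gpu_id] -= 1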
Fig. 2 shows the workflow of the GPU resource management node. The virtual machine client module requests resources from the GPU resource management module, and the GPU server module registers resources with the GPU resource management module. The GPU resource management module parses each request and responds according to the current state.
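The parse-and-respond behaviour of Fig. 2 can be pictured as a small dispatch over the two request types. The sketch below reuses the hypothetical ResourceManager, message constants and send/recv helpers introduced earlier; the reply fields are illustrative assumptions.

    # Sketch of the management node handling one incoming connection (Fig. 2):
    # parse the request and respond according to the current state table.
    import socket

    def handle_request(conn: socket.socket, manager: ResourceManager) -> None:
        msg = recv_msg(conn)
        if msg["type"] == MSG_REGISTER:
            # A GPU server registers its resources (steps 2 and 3).
            ok = manager.register(msg["host"], msg["port"], msg.get("num_gpus", 1))
            send_msg(conn, {"type": MSG_REGISTER, "ok": ok})
        elif msg["type"] == MSG_REQUEST:
            # A virtual machine client asks for a GPU server (steps 5 and 6).
            chosen = manager.allocate()
            reply = {"type": MSG_ASSIGN}
            if chosen is not None:
                reply.update({"host": chosen.host, "port": chosen.port})
            send_msg(conn, reply)
        conn.close()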
Fig. 3 shows the workflow of the interaction between the virtual machine client module and the GPU server module. The computation is performed jointly by the virtual machine client module and the GPU server module. After the virtual machine client module and the GPU server module establish a connection, the two sides exchange data. The virtual machine client module intercepts the GPU calls of applications running on the local machine and sends the data to the GPU server module. The GPU server module receives the data from the virtual machine client, parses it, selects a GPU, computes the result on that GPU, and returns the execution result. The virtual machine client module receives the data and returns the execution result to the application. The GPU server module then checks whether the data sent by the client is a stop-transmission command; if not, the above process is repeated; if so, the connection is closed.
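From the client's point of view, the Fig. 3 flow is one round trip per intercepted call. The sketch below reuses the hypothetical send_msg/recv_msg helpers and message constants from the earlier protocol sketch; how real GPU API calls (for example CUDA calls) are intercepted and serialized into the encapsulated form is outside its scope.

    # Sketch of the virtual machine client module's round trip (Fig. 3). The
    # encapsulated call is passed in as a plain dictionary; interception of the
    # actual GPU API is not modelled here.
    import socket

    def forward_gpu_call(server_host: str, server_port: int, encapsulated_call: dict) -> dict:
        """Send one intercepted GPU call to the assigned GPU server and wait for the result."""
        with socket.create_connection((server_host, server_port)) as sock:
            send_msg(sock, {"type": MSG_CALL, "call": encapsulated_call})
            reply = recv_msg(sock)              # the GPU server executes and answers
            send_msg(sock, {"type": MSG_STOP})  # stop-transmission command closes the session
            return reply["result"]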
The working method of the platform architecture supporting multi-GPU virtualization specifically comprises the following steps:
Step 1: start the GPU resource management module, listen for registration requests from GPU server modules and resource requests from virtual machine client modules, and maintain a state table of the registered GPU servers.
Step 2: start the GPU server module and send a registration request to the GPU resource management module.
Step 3: the GPU resource management module receives the registration request, builds a table that maintains the state of the current GPU server, and returns a success message when finished.
Step 4: the GPU server module receives the registration-success message and immediately listens on a designated port.
Step 5: start the virtual machine client module and monitor GPU calls; when a GPU call occurs in the virtual machine, send a resource request to the GPU resource management module.
Step 6: the GPU resource management module receives the resource request, obtains the working state of the registered GPU servers and, according to a matching algorithm, assigns the best-matched GPU server to the virtual machine client module.
Step 7: the virtual machine client module receives the assigned GPU server and establishes a data transmission connection with the GPU server module of that GPU server; it encapsulates the intercepted calls and sends them to the GPU server module over the data transmission connection.
Step 8: the GPU server module receives the encapsulated data, takes the load information of each GPU into account, selects the best-matched GPU for the call, executes the call, and returns the results until execution ends (a per-connection sketch follows step 9 below).
Step 9: the virtual machine client module receives the results and returns them to the application until execution ends.
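On the GPU server side, steps 8 and 9 together with the Fig. 3 loop amount to a per-connection service loop: receive an encapsulated call, choose a GPU by current load, execute it, return the result, and close when the stop-transmission command arrives. The sketch below, referenced in step 8, reuses the earlier hypothetical helpers and the GpuLoadBalancer sketch; execute_on_gpu is a placeholder stub, not the patent's actual execution path.

    # Sketch of the GPU server module's per-connection loop (steps 8 and 9, Fig. 3).
    import socket

    def execute_on_gpu(gpu_id: int, call: dict) -> dict:
        """Placeholder for the real GPU invocation (for example, dispatching a CUDA call)."""
        return {"gpu": gpu_id, "status": "ok"}

    def handle_client(conn: socket.socket, balancer: GpuLoadBalancer) -> None:
        try:
            while True:
                msg = recv_msg(conn)
                if msg["type"] == MSG_STOP:  # stop-transmission command: end of session
                    break
                gpu_id = balancer.acquire()  # most suitable GPU by current task load
                try:
                    result = execute_on_gpu(gpu_id, msg["call"])
                finally:
                    balancer.release(gpu_id)
                send_msg(conn, {"type": MSG_RESULT, "result": result})
        finally:
            conn.close()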

Claims (3)

1. A working method of a platform architecture supporting multi-GPU virtualization, wherein the platform architecture comprises a GPU resource management module, a virtual machine client module and a GPU server module; the GPU resource management module is deployed on a GPU resource management node, the virtual machine client module is deployed on a virtual machine client, and the GPU server module is deployed on a GPU server; the GPU resource management module is responsible for registering GPU servers and handling GPU resource requests; the virtual machine client module and the GPU server module exchange data and interact with each other: the virtual machine client module intercepts the virtual machine's GPU calls and redirects them to the GPU server module, while the GPU server module accepts the GPU call information intercepted by the virtual machine client module, invokes a GPU to execute it, and returns the execution result;
the GPU resource management module processes the requests of the GPU server module and the virtual machine client module: it responds to registration requests from the GPU server module, monitors the load of each GPU server in real time, adjusts the tasks of each GPU server and manages the computing resources of each GPU server, and at the same time responds to requests from the virtual machine client module according to the load and assigns to it the best-matched GPU server;
the working method comprises the following steps:
Step 1: starting the GPU resource management module, listening for registration requests from the GPU server module and resource requests from the virtual machine client module, and maintaining a state table of the registered GPU servers;
Step 2: starting the GPU server module and sending a registration request to the GPU resource management module;
Step 3: the GPU resource management module receives the registration request, builds a state table that maintains the working state of the current GPU server, and returns a registration-success message when finished;
Step 4: the GPU server module receives the registration-success message and immediately listens on a designated port;
Step 5: starting the virtual machine client module and monitoring GPU calls; when a GPU call occurs in the virtual machine, sending a resource request to the GPU resource management module;
Step 6: the GPU resource management module receives the resource request, obtains the working state of the registered GPU servers and, according to a matching algorithm, assigns the best-matched GPU server to the virtual machine client module;
Step 7: the virtual machine client module receives the assigned GPU server and establishes a data transmission connection with the GPU server module of that GPU server; the virtual machine client module encapsulates the intercepted calls and sends them to the GPU server module over the data transmission connection;
Step 8: the GPU server module receives the encapsulated data, takes the load information of each GPU into account, selects the best-matched GPU for the call, executes the call, and returns the results until execution ends;
Step 9: the virtual machine client module receives the results and returns them to the application until execution ends.
2. The working method according to claim 1, wherein the virtual machine client module and the GPU server module interact through sockets, InfiniBand, or a dedicated communication channel of the virtualization platform, and complete GPU acceleration for the virtual machine by interception and redirection.
3. The working method according to claim 1, wherein the GPU server module performs resource management of the multiple GPUs in the GPU server; each GPU supports multiple tasks running in parallel; the GPU server module keeps statistics of the current task load of each GPU and balances the load across the GPUs according to the current task load.
CN201210102989.8A 2012-04-10 2012-04-10 Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture Active CN102650950B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210102989.8A CN102650950B (en) 2012-04-10 2012-04-10 Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210102989.8A CN102650950B (en) 2012-04-10 2012-04-10 Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture

Publications (2)

Publication Number Publication Date
CN102650950A CN102650950A (en) 2012-08-29
CN102650950B 2015-04-15

Family

ID=46692958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210102989.8A Active CN102650950B (en) 2012-04-10 2012-04-10 Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture

Country Status (1)

Country Link
CN (1) CN102650950B (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593246B (en) * 2012-08-15 2017-07-11 中国电信股份有限公司 Communication means, host and dummy machine system between virtual machine and host
CN103257885A (en) * 2013-05-20 2013-08-21 深圳市京华科讯科技有限公司 Media virtualization processing method
CN103257884A (en) * 2013-05-20 2013-08-21 深圳市京华科讯科技有限公司 Virtualization processing method for equipment
CN103309748B (en) * 2013-06-19 2015-04-29 上海交通大学 Adaptive scheduling host system and scheduling method of GPU virtual resources in cloud game
CN103324505B (en) * 2013-06-24 2016-12-28 曙光信息产业(北京)有限公司 The method disposing GPU development environment in group system and cloud computing system
CN103713938A (en) * 2013-12-17 2014-04-09 江苏名通信息科技有限公司 Multi-graphics-processing-unit (GPU) cooperative computing method based on Open MP under virtual environment
CN104754464A (en) * 2013-12-31 2015-07-01 华为技术有限公司 Audio playing method, terminal and system
CN105122210B (en) 2013-12-31 2020-02-21 华为技术有限公司 GPU virtualization implementation method and related device and system
US9584594B2 (en) 2014-04-11 2017-02-28 Maxeler Technologies Ltd. Dynamic provisioning of processing resources in a virtualized computational architecture
US10715587B2 (en) 2014-04-11 2020-07-14 Maxeler Technologies Ltd. System and method for load balancing computer resources
US9501325B2 (en) 2014-04-11 2016-11-22 Maxeler Technologies Ltd. System and method for shared utilization of virtualized computing resources
CN105988874B (en) * 2015-02-10 2020-08-28 阿里巴巴集团控股有限公司 Resource processing method and device
WO2016145632A1 (en) * 2015-03-18 2016-09-22 Intel Corporation Apparatus and method for software-agnostic multi-gpu processing
CN106155804A (en) * 2015-04-12 2016-11-23 北京典赞科技有限公司 Method and system to the unified management service of GPU cloud computing resources
CN105159753B (en) 2015-09-25 2018-09-28 华为技术有限公司 The method, apparatus and pooling of resources manager of accelerator virtualization
CN111865657B (en) * 2015-09-28 2022-01-11 华为技术有限公司 Acceleration management node, acceleration node, client and method
GB2545170B (en) * 2015-12-02 2020-01-08 Imagination Tech Ltd GPU virtualisation
CN105528249B (en) * 2015-12-06 2019-04-05 北京天云融创软件技术有限公司 A kind of dispatching method of multiple users share GPU resource
CN105959404A (en) * 2016-06-27 2016-09-21 江苏易乐网络科技有限公司 GPU virtualization platform based on cloud computing
CN108121312B (en) * 2017-11-29 2020-10-30 南瑞集团有限公司 ARV load balancing system and method based on integrated hydropower management and control platform
US10846138B2 (en) 2018-08-23 2020-11-24 Hewlett Packard Enterprise Development Lp Allocating resources of a memory fabric
CN109376011B (en) * 2018-09-26 2021-01-15 郑州云海信息技术有限公司 Method and device for managing resources in virtualization system
CN109388496A (en) * 2018-11-01 2019-02-26 北京视甄智能科技有限公司 A kind of image concurrent processing method, apparatus and system based on more GPU cards
CN109656714B (en) * 2018-12-04 2022-10-28 成都雨云科技有限公司 GPU resource scheduling method of virtualized graphics card
CN109582425B (en) * 2018-12-04 2020-04-14 中山大学 GPU service redirection system and method based on cloud and terminal GPU fusion
CN109598250B (en) * 2018-12-10 2021-06-25 北京旷视科技有限公司 Feature extraction method, device, electronic equipment and computer readable medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101938368A (en) * 2009-06-30 2011-01-05 国际商业机器公司 Virtual machine manager in blade server system and virtual machine processing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8776050B2 (en) * 2003-08-20 2014-07-08 Oracle International Corporation Distributed virtual machine monitor for managing multiple virtual resources across multiple physical nodes

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101938368A (en) * 2009-06-30 2011-01-05 国际商业机器公司 Virtual machine manager in blade server system and virtual machine processing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Lin Shi et al. "vCUDA: GPU accelerated high performance computing in virtual machines." IEEE International Symposium on Parallel & Distributed Processing (IPDPS 2009), 2009, pp. 1-11. *

Also Published As

Publication number Publication date
CN102650950A (en) 2012-08-29

Similar Documents

Publication Publication Date Title
CN102650950B (en) Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture
CN102929718B (en) Distributed GPU (graphics processing unit) computer system based on task scheduling
CN105912389B (en) The virtual machine (vm) migration system under mixing cloud environment is realized based on data virtualization
CN202565304U (en) Distributed computing task scheduling and execution system
CN102479100A (en) Pervasive computing environment virtual machine platform and creation method thereof
CN102271145A (en) Virtual computer cluster and enforcement method thereof
CN104615480A (en) Virtual processor scheduling method based on NUMA high-performance network processor loads
CN101860024B (en) Implementation method for integrating provincial dispatch organization PAS system in electric power system
CN104023062A (en) Heterogeneous computing-oriented hardware architecture of distributed big data system
Bala et al. Offloading in cloud and fog hybrid infrastructure using iFogSim
Montella et al. Virtualizing high-end GPGPUs on ARM clusters for the next generation of high performance cloud computing
CN103501295B (en) A kind of remote access method based on virtual machine (vm) migration and equipment
CN103685564A (en) Plug-in application ability layer introduced industry application online operation cloud platform architecture
CN102137162B (en) CAD (Computer Aided Design) integrated system based on mode of software used as service
CN105959404A (en) GPU virtualization platform based on cloud computing
Whaiduzzaman et al. Pefc: Performance enhancement framework for cloudlet in mobile cloud computing
Varghese et al. Acceleration-as-a-service: Exploiting virtualised GPUs for a financial application
Sarddar et al. Central controller framework for mobile cloud computing
CN104166581A (en) Virtualization method for increment manufacturing device
CN103747439A (en) Wireless controller equipment, wireless authentication processing method, system and networking technique
Chinenyeze et al. An aspect oriented model for software energy efficiency in decentralised servers
Liu et al. BSPCloud: A hybrid distributed-memory and shared-memory programming model
CN104699520B (en) A kind of power-economizing method based on virtual machine (vm) migration scheduling
CN104899094A (en) Virtual machine load balance processing method
Li et al. Reliability evaluation for cloud computing system considering common cause failure

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20120829

Assignee: Jiangsu Wisedu Information Technology Co., Ltd.

Assignor: Nanjing University of Aeronautics and Astronautics

Contract record no.: 2013320000314

Denomination of invention: Platform architecture supporting multi-GPU (Graphics Processing Unit) virtualization and work method of platform architecture

License type: Exclusive License

Record date: 20130410

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
C14 Grant of patent or utility model
GR01 Patent grant
EM01 Change of recordation of patent licensing contract

Change date: 20150421

Contract record no.: 2013320000314

Assignee after: JIANGSU WISEDU EDUCATION INFORMATION TECHNOLOGY CO., LTD.

Assignee before: Jiangsu Wisedu Information Technology Co., Ltd.

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
EC01 Cancellation of recordation of patent licensing contract

Assignee: JIANGSU WISEDU EDUCATION INFORMATION TECHNOLOGY CO., LTD.

Assignor: Nanjing University of Aeronautics and Astronautics

Contract record no.: 2013320000314

Date of cancellation: 20150430

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model