CN114328366A - Method, device and storage medium for implementing serialization and deserialization of nested data - Google Patents


Info

Publication number
CN114328366A
Authority
CN
China
Prior art keywords: identifier, layer, data, serialization, information
Prior art date
Legal status
Pending
Application number
CN202011043849.9A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed
Current Assignee
Cambricon Technologies Corp Ltd
Original Assignee
Cambricon Technologies Corp Ltd
Priority date
Filing date
Publication date
Application filed by Cambricon Technologies Corp Ltd
Priority to CN202011043849.9A
Priority to PCT/CN2021/102073 (WO2022062510A1)
Priority to US18/003,689 (US20230244380A1)
Publication of CN114328366A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present disclosure relates to methods, devices and readable storage media for implementing serialization and deserialization of nested data. The system-on-chip of the present disclosure is included in an integrated circuit device that includes a universal interconnect interface and other processing devices. The computing device interacts with the other processing devices to jointly complete computing operations specified by the user. The integrated circuit device may further include a storage device, which is connected to the computing device and the other processing devices respectively, for data storage of the computing device and the other processing devices.

Description

Method, device and storage medium for implementing serialization and deserialization of nested data
Technical Field
The present disclosure relates generally to the field of computers. More particularly, the present disclosure relates to a method, system, integrated circuit device, board card, and computer-readable storage medium for serializing and deserializing nested data.
Background
Live migration (Live Migration), also called dynamic migration or real-time migration, completely saves the running state of a virtual machine through a save (SAVE)/restore (LOAD) procedure and migrates that state from one physical server to another physical server. After recovery, the virtual machine still runs smoothly and the user does not perceive any difference.
In the field of artificial intelligence, live migration cannot be fully implemented due to the high complexity of application-specific integrated circuits (ASICs). In particular, during live migration, how the source server serializes information and how the destination server deserializes information are problems to be solved in the prior art.
Disclosure of Invention
To at least partially solve the technical problems mentioned in the background, the disclosed solution provides a method, a system, an integrated circuit device, a board card and a computer-readable storage medium for serializing and deserializing nested data.
According to an aspect of the present disclosure, there is provided a system for serializing nested data including at least a first layer structure and a second layer structure, the system comprising: a memory and a serialization device. The memory is used for storing the nested data. The serialization device is used for generating information to be migrated in response to a live migration initiation request, and the data structure of the information to be migrated comprises: a data structure layer including a first symbol identifier for recording the name of the first layer structure; and a serialization layer including a second symbol identifier for recording the name of the second layer structure.
According to another aspect of the present disclosure, there is provided a system for deserializing nested data, the nested data including at least a first layer structure and a second layer structure, the system comprising: a deserialization device and a memory. The deserialization device is used for: receiving information to be migrated, wherein the data structure of the information to be migrated comprises a data structure layer including a first symbol identifier and a serialization layer including a second symbol identifier; retrieving first serialized data based on the first symbol identifier; retrieving second serialized data based on the second symbol identifier; restoring the first serialized data to the first layer structure; and restoring the second serialized data to the second layer structure. The memory is used for storing the first layer structure and the second layer structure.
According to another aspect of the present disclosure, there is provided an integrated circuit device including the system of any one of the preceding claims, and a board including the integrated circuit device.
According to another aspect of the present disclosure, there is provided a method of serializing nested data including at least a first layer structure and a second layer structure, the method comprising: generating information to be migrated in response to a live migration initiation request, wherein generating the information to be migrated comprises: generating a first symbol identifier in the data structure layer of the information to be migrated to record the name of the first layer structure; and generating a second symbol identifier in the serialization layer of the information to be migrated to record the name of the second layer structure.
According to another aspect of the present disclosure, there is provided a method of deserializing nested data, the nested data including at least a first layer structure and a second layer structure, the method comprising: receiving information to be migrated, wherein the data structure of the information to be migrated comprises a data structure layer including a first symbol identifier and a serialization layer including a second symbol identifier; retrieving first serialized data based on the first symbol identifier; retrieving second serialized data based on the second symbol identifier; restoring the first serialized data to the first layer structure; restoring the second serialized data to the second layer structure; and storing the first layer structure and the second layer structure.
According to another aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program code for serializing or deserializing nested data, which when executed by a processor, performs the foregoing method.
The method and the system of the present disclosure can serialize information on the source server and deserialize information on the destination server, achieving the technical effect of live migration.
Drawings
The description of the exemplary embodiments of the present disclosure as well as other objects, features and advantages thereof will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a schematic diagram illustrating an artificial intelligence chip framework of an embodiment of the disclosure;
FIG. 2 is a schematic diagram illustrating the internal structure of a computing device according to an embodiment of the disclosure;
FIG. 3 is a flow diagram illustrating a migration save path according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating the migration save path performed on the source server side according to an embodiment of the present disclosure;
FIG. 5 is a diagram illustrating a data structure of information to be migrated;
FIG. 6 is a schematic diagram illustrating a structure in a data structure layer according to an embodiment of the disclosure;
FIG. 7 is a diagram illustrating a data structure generated when nested data is serialized;
FIG. 8 is a flow diagram illustrating the generation of a data structure for information to be migrated;
FIG. 9 is a flowchart illustrating the generation of a data structure for information to be migrated;
FIG. 10 is a flow chart illustrating the generation of information to be migrated by an embodiment of the present disclosure;
FIG. 11 is a flow diagram illustrating a migration restoration path according to an embodiment of the present disclosure;
FIG. 12 is a schematic diagram illustrating the migration recovery path on the destination server side according to an embodiment of the present disclosure;
FIG. 13 is a flow diagram illustrating a deserializer implementing a live migration restoration path of an embodiment of the present disclosure;
FIG. 14 is a flow diagram illustrating a deserializer deserialization protocol layer of an embodiment of the present disclosure;
FIG. 15 is a flow chart illustrating deserialization configuration information of an embodiment of the present disclosure;
FIG. 16 is a flow chart illustrating deserialization of data information according to an embodiment of the present disclosure;
FIG. 17 is a flow diagram illustrating identifying or retrieving information for a serialization layer of an embodiment of the present disclosure;
FIG. 18 is a flow diagram illustrating deserializing nested data of an embodiment of the present disclosure;
FIG. 19 is a schematic diagram illustrating nested data of an embodiment of the present disclosure;
FIG. 20 is a flow chart illustrating identifying or retrieving information for a second tier structure in accordance with an embodiment of the present disclosure;
FIG. 21 is a block diagram illustrating an integrated circuit device of an embodiment of the disclosure; and
FIG. 22 is a schematic diagram illustrating a board card according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. The described embodiments are only a subset of the embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be understood that the terms "first," "second," "third," and "fourth," etc. in the claims, description, and drawings of the present disclosure are used to distinguish between different objects and are not used to describe a particular order. The terms "comprises" and "comprising," when used in the specification and claims of this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the disclosure herein is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the specification and claims of this disclosure refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
As used in this specification and claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection".
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The present disclosure relates to a framework employing virtualization techniques for application on an application specific integrated circuit, such as a machine learning device for neural networks, which may be a convolutional neural network accelerator. The following will be illustrated with an artificial intelligence chip.
FIG. 1 is a framework diagram of artificial intelligence chip virtualization. The framework 100 includes a user space 102, a kernel space 104 and a system-on-chip 106, separated by dashed lines. The user space 102 is the running space of user programs; it only performs simple operations, cannot directly call system resources, and can issue instructions to the kernel space 104 only through the system interface. The kernel space 104 is the space where kernel code runs; it can execute any command and call all resources of the system. The system-on-chip 106 is a module of the artificial intelligence chip, which cooperates with the user space 102 through the kernel space 104.
In this embodiment, the hardware of the user space 102 is collectively referred to as a device or apparatus, and the hardware of the system-on-chip 106 is collectively referred to as a device or unit, for distinction. Such an arrangement is merely for the purpose of more clearly describing the technology of this embodiment, and does not set any limit to the technology of the present disclosure.
This embodiment is illustrated with one physical part virtualized into four virtual parts unless otherwise emphasized, but the present disclosure does not limit the number of virtual parts.
Before virtualization runs, the user space 102 obtains information of the system-on-chip 106 through the call interface of the hardware monitor tool 108. The hardware monitor tool 108 not only collects information of the system-on-chip 106, but also obtains in real time the overhead imposed by upper-layer software on the resources of the system-on-chip 106, so that the user can grasp in real time the detailed information and status of the current system-on-chip 106. Such detailed information and status may include: the hardware device model, the firmware version number, the driver version number, the device utilization rate, the overhead state of the storage device, the board power consumption and board peak power consumption, the peripheral component interconnect express (PCIe) status, and the like. The content and amount of information monitored may vary with the version and usage scenario of the hardware monitor tool 108.
After the system starts virtualization, the operation of the user space 102 is instead taken over by the user virtual machine 110, the user virtual machine 110 is an abstraction and a simulation of the real computing environment, and the system allocates a set of data structures to manage the state of the user virtual machine 110, where the data structures include a complete set of registers, the use of physical memory, the state of virtual devices, and so on. The physical space of the user space 102 in this embodiment is virtualized into four virtual spaces 112, 114, 116, and 118, the four virtual spaces 112, 114, 116, and 118 are independent from each other, and can respectively carry different guest operating systems, such as guest operating system 1, guest operating system 2, guest operating system 3, and guest operating system 4 shown in the figure, where the guest operating systems may be Windows, Linux, Unix, iOS, android, and the like, and each guest operating system runs different application programs.
In the context of the present disclosure, user virtual machine 110 is implemented with a Quick Emulator (QEMU). QEMU is an open source virtualization software written in C language that virtualizes the interfaces through dynamic binary translation and provides a series of hardware models that allow guest operating systems 1, 2, 3, 4 to think they are accessing the system-on-chip 106 directly. The user space 102 includes processors, memory, I/O devices, etc., and the QEMU may virtualize the processors of the user space 102 into four virtual processors, and memory into four virtual memories, as well as virtualize the I/O devices into four virtual I/O devices. Each guest operating system occupies a portion of the user space 102, e.g., one-fourth, i.e., has access to a virtual processor, a virtual memory, and a virtual I/O device, respectively, to perform the tasks of the guest operating system. In this mode, the guest operating systems 1, 2, 3 and 4 can operate independently.
Kernel space 104 carries kernel virtual machine 120 and chip driver 122. The kernel virtual machine 120, in conjunction with the QEMU, is primarily responsible for virtualizing the kernel space 104 and the system-on-chip 106, so that each guest operating system can obtain its own address space when accessing the system-on-chip 106. In more detail, the space on the system-on-chip 106 that maps to the guest operating system is actually a virtual component that maps to this process.
The kernel virtual machine 120 includes a physical function driver, which is a driver specially managing the global function of the SR-IOV device and generally needs to have higher authority than that of a general virtual machine to operate the SR-IOV device. The physical function drivers contain the functionality of all conventional drivers so that the user space 102 has access to the I/O resources of the system-on-chip 106.
From the perspective of the user virtual machine 110, during the running of the virtual machine, the QEMU performs kernel setting through the system call interface provided by the kernel virtual machine 120, and the QEMU uses the virtualization capability of the kernel virtual machine 120 to provide hardware virtualization acceleration for its virtual machines so as to improve virtual machine performance. From the perspective of the kernel virtual machine 120, a user cannot directly interact with the kernel space 104, so a management tool in the user space 102 is needed; for this reason QEMU, a tool operating in the user space 102, is required.
The chip driver 122 is used to drive the physical function (PF) 126. During the operation of the virtual machine, the user space 102 no longer accesses the system-on-chip 106 through the hardware monitor tool 108 and the chip driver 122; therefore, guest operating system 1, guest operating system 2, guest operating system 3 and guest operating system 4 are each configured with a user kernel space 124 for loading the chip driver 122, so that each guest operating system can still drive the system-on-chip 106.
The system-on-chip 106 performs virtualization through a single root I/O virtualization (SR-IOV) technique, and in more detail, in the environment of the present disclosure, the SR-IOV technique is implemented by combining software and hardware, so that each component of the system-on-chip 106 is virtualized. SR-IOV technology is a virtualization solution that allows efficient sharing of PCIe resources between virtual machines, making a single PCIe resource shareable by multiple virtual components of the system-on-chip 106, providing dedicated resources for these virtual components. Thus, each virtual component has its own corresponding uniquely accessible resource.
The system-on-chip 106 of this embodiment includes hardware and firmware. The hardware includes read only memory devices ROM (not shown) for storing firmware including physical functions 126 for supporting or coordinating PCIe functions of the SR-IOV, the physical functions 126 having the authority to fully configure PCIe resources. In implementing the SR-IOV technique, the physical functions 126 virtualize a plurality of Virtual Functions (VFs) 128, in this embodiment four virtual functions 128. The virtual function 128 is a lightweight PCIe function, managed by the physical function 126, that may share PCIe physical resources with the physical function 126 and with other virtual functions 128 associated with the same physical function 126. The virtual function 128 only allows the controlling physical function 126 to configure its resources.
Once SR-IOV is enabled in a physical function 126, each virtual function 128 may access its own PCIe configuration space through its own bus, device and function number. Each virtual function 128 has a memory space for mapping its register set. The virtual function 128 driver operates on the register set to enable its functionality and is directly assigned to the corresponding user virtual machine 110. Although virtual, the user virtual machine 110 is made to consider the PCIe device as actually existing.
The hardware of the system-on-chip 106 also includes a computing device 130, a video codec device 132, a JPEG codec device 134, a storage device 136, and PCIe 138. In this embodiment, the computing device 130 is an Intelligent Processing Unit (IPU) for performing convolution calculation of the neural network; the video codec device 132 is used for coding and decoding video data; the JPEG codec device 134 is used for encoding and decoding a still picture using the JPEG algorithm; the memory device 136 may be a Dynamic Random Access Memory (DRAM) for storing data; PCIe 138 is the aforementioned PCIe, during the virtual machine operation, PCIe 138 is virtualized into four virtual interfaces 140, and virtual functions 128 and virtual interfaces 140 are in a one-to-one correspondence, that is, a first virtual function interfaces to a first virtual interface, a second virtual function interfaces to a second virtual interface, and so on.
With SR-IOV technology, computing device 130 is virtualized as four virtual computing devices 142, video codec device 132 is virtualized as four virtual video codec devices 144, JPEG codec device 134 is virtualized as four virtual JPEG codec devices 146, storage device 136 is virtualized as four virtual storage devices 148.
Each guest operating system is configured with a set of virtual suites, each set of virtual suites including a user virtual machine 110, a virtual interface 140, a virtual function 128, a virtual computing device 142, a virtual video codec device 144, a virtual JPEG codec device 146, and a virtual storage device 148. Each set of virtual suites runs independently without affecting each other, and is used to execute the tasks delivered by the corresponding guest operating systems, so as to ensure that each guest operating system can access the configured virtual computing device 142, virtual video codec device 144, virtual JPEG codec device 146 and virtual storage device 148 through the configured virtual interface 140 and virtual function 128.
In more detail, each guest operating system responds to different tasks when executing the tasks, and hardware required to be accessed may also be different, for example: if a task is to perform a matrix convolution calculation, the guest operating system accesses the configured virtual compute device 142 through the configured virtual interface 140 and virtual function 128; if a task is video codec, the guest operating system accesses the configured virtual video codec device 144 through the configured virtual interface 140 and virtual function 128; if a task is JPEG encoding and decoding, the guest OS accesses the configured virtual JPEG codec device 146 via the configured virtual interface 140 and virtual function 128; if a task is to read or write data, the guest operating system accesses the configured virtual storage device 148 through the configured virtual interface 140 and virtual function 128.
FIG. 2 illustrates an internal schematic of a multi-core computing device 130. The computing device 130 has sixteen processing unit cores (processing unit core 0 to processing unit core 15) in total for executing the matrix computing task, and each four processing unit cores form a processing unit group, i.e., a cluster (cluster). In more detail, processing unit core 0 through processing unit core 3 form a first cluster 202, processing unit core 4 through processing unit core 7 form a second cluster 204, processing unit core 8 through processing unit core 11 form a third cluster 206, and processing unit core 12 through processing unit core 15 form a fourth cluster 208. The computing device 130 basically performs computing tasks in units of clusters.
Computing device 130 also includes a memory unit core 210 and a shared memory unit 212. The memory cell core 210 is mainly used for controlling data exchange, and is used as a channel for the computing device 130 to communicate with the storage device 136. The shared memory unit 212 is used for temporarily storing the calculated intermediate values of the clusters 202, 204, 206, 208. During the virtualization operation, the memory unit core 210 will be split into four virtual memory unit cores, and the shared memory unit 212 will also be split into four virtual shared memory units.
Each virtual compute device 142 is configured with a virtual storage unit core, a virtual shared storage unit, and a cluster, respectively, to support the tasks of a particular guest operating system. Similarly, each of the virtual computing devices 142 operates independently and does not affect each other.
The number of clusters of computing device 130 should be at least the same as the number of virtual computing devices 142 to ensure that each virtual computing device 142 can configure one cluster, and when the number of clusters is greater than the number of virtual computing devices 142, the clusters can be appropriately configured to the virtual computing devices 142 according to actual needs to increase flexibility of hardware configuration.
The video codec device 132 of this embodiment includes six video codec units. The video codec device 132 can flexibly allocate the video codec units according to the number of virtual components and the required resources. For example: the video codec device 132 is virtualized into four virtual video codec devices 144, and assuming that the first virtual video codec device and the second virtual video codec device require more video codec resources, two video codec units may be respectively configured for the first virtual video codec device and the second virtual video codec device, and one video codec unit may be respectively configured for the other virtual video codec devices 144. Another example is: the video codec device 132 is virtualized into three virtual video codec devices 144, and under the condition that any one of the virtual video codec devices does not need more video codec resources, two video codec units can be respectively configured for each virtual video codec device 144.
The number of the video codec units should be at least the same as the number of the virtual video codec devices 144, so as to ensure that each virtual video codec device 144 can configure one video codec unit, and when the number of the video codec units is greater than the number of the virtual video codec devices 144, the video codec units can be properly configured to the virtual video codec devices 144 according to actual requirements, so as to increase flexibility of hardware configuration.
Likewise, the JPEG encoding and decoding device 134 of this embodiment comprises six JPEG encoding and decoding units. The JPEG codec device 134 can flexibly allocate JPEG codec units according to the number of virtual components and the required resources, and the allocation method is the same as that of the video codec device 132, so that the details are not repeated.
The storage device 136 may adopt a non-uniform memory access (NUMA) architecture, and includes a plurality of DDR channels, and the storage device 136 may flexibly allocate the DDR channels according to the number of virtual components and required resources, and the allocation manner of the DDR channels is the same as that of the computing device 130, the video codec device 132, and the JPEG codec device 134, and thus is not described herein again.
An application scenario of the present disclosure is a cloud-side data center. The data center needs maintenance work to ensure the stability and the fluency of the whole system, and the maintenance work involves computer sharing, database backup, troubleshooting, uneven resource distribution (such as heavy load and light load), daily maintenance and the like. While the data center performs the aforementioned maintenance work, it must ensure the normal operation of the system, so that the user does not perceive any difference. The present disclosure is based on the architectures of FIG. 1 and FIG. 2, and implements a live migration technique, which completely saves the running state of the entire virtual machine, and quickly restores the running state to the original hardware platform, even to a different hardware platform. After recovery, the virtual machine is still running smoothly.
Based on the foregoing exemplary framework, the live migration scheme of the present disclosure is divided into two stages: the first stage packs the configuration and data on the source server and sends them to the destination server, i.e., the migration save path; the second stage places these configurations and data at the corresponding locations on the destination server, i.e., the migration restoration path. The live migration scheme completely saves the running state and data of the entire virtual machine and then quickly restores them to the original hardware platform or even a different hardware platform. Whether on the same platform or not, the source server and the destination server both have the architectures shown in FIG. 1 and FIG. 2, and the hardware, software and firmware versions of the destination server need to be equal to or higher than those of the source server, so as to ensure that the destination server can correctly identify the information when migration and recovery are performed. The two stages of the live migration scheme will be described separately.
FIG. 3 is a flowchart illustrating the migration save path according to an embodiment of the present disclosure, where the source server of this embodiment may be the system disclosed in FIG. 1, and FIG. 4 is a schematic diagram illustrating the migration save path performed by a source server having the architecture of FIG. 1. While the user space 102 is still running, this embodiment packs the driver, firmware and hardware information, context information and their state information associated with the specific virtual hardware on the system-on-chip 106, which may include state information of the drivers of the virtual functions, firmware and hardware state information, state machines, registers, context state information of the internal states of the hardware, and context state information of software state machines, variables and constants at runtime, etc., to be sent from the source server.
In step 301, the virtualization management software initiates a migration request to the emulated virtual machine QEMU 402. The virtualization management software of this embodiment is Libvirt 401. Libvirt 401 is an open-source application programming interface (API), daemon and management tool for managing virtualization platforms, and can be used to manage the virtualization technology of QEMU 402. When the system-on-chip 106 needs to be maintained as described above, Libvirt 401 starts a live migration to ensure normal operation of the virtual machine service.
In step 302, QEMU 402 notifies the physical function driver 403 to initiate the migration, i.e., QEMU 402 initiates a live migration initiation request. This embodiment provides a model to manage the entire migration save path process, namely the virtual machine learning unit QEMU object model (VMLU QOM), where the virtual machine learning unit refers to the virtualized artificial intelligence system-on-chip 106 shown in FIG. 1, and the QEMU object model is a simulated PCIe device.
More specifically, the VMLU QOM 404 adds a virtual PCIe device to QEMU 402, registers it as a QEMU object model in QEMU 402, indicates to QEMU 402 that it is capable of live migration, and provides live-migration-related scheduling routine (dispatch routine) functions, so that QEMU 402 can schedule smoothly during live migration. In this step, QEMU 402 operates the physical function driver 403 through the scheduling routine functions to notify and control the physical function driver 403 to coordinate the live migration.
The interaction between the user space 102 and the physical function driver 403 is realized through the memory-mapped I/O (MMIO) of the VMLU QOM 404. Memory-mapped I/O is part of the PCI specification: I/O devices are placed in memory space instead of I/O space. From the perspective of the processor of the user space 102, after memory mapping, the system accesses other devices in the same way as it accesses memory, which simplifies programming difficulty and interface complexity.
In this step, the VMLU QOM 404 in QEMU 402 initiates a live migration initiation request and sends the live migration initiation request to the physical function driver 403.
In step 303, the physical function driver 403 notifies the virtual function driver 405 to initiate the migration. The virtual function driver 405 resides in the virtual machine kernel space. What it sees during the live migration is the aforementioned virtual PCIe device, i.e., a readable and writable memory-mapped I/O space, and its read and write operations on that memory-mapped I/O space (i.e., on the system-on-chip 106) are captured and managed by the VMLU QOM 404. For read operations, the VMLU QOM 404 can return the value that should be returned as needed by the virtual function driver 405, allowing synchronization between the virtual function driver 405 and the physical function driver 403.
The VMLU QOM 404 obtains the migration status of the physical function driver 403 by calling the interface of the physical function driver 403. When the virtual function driver 405 reads the memory-mapped I/O space of the VMLU QOM 404, the VMLU QOM 404 returns the state of the physical function 406 to the virtual function driver 405. In this step, the VMLU QOM 404 passes the state of the physical function driver 403, which is ready for live migration, to the virtual function driver 405.
In step 304, the virtual function driver 405 suspends execution of tasks from the user space 102. In this embodiment, the virtual function driver 405 does not return control of the processor to the application program in the user space 102, so the guest operating system keeps waiting and does not issue the next task to the virtual function driver 405, thereby suspending execution of tasks in the user space 102.
In step 305, the virtual function driver 405 notifies the physical function driver 403 that migration is ready. After suspending execution of instructions from the user space 102, the virtual function driver 405 notifies the physical function driver 403 that the user space 102 is ready, so that no instruction issuance interferes during the live migration.
In step 306, the physical function driver 403 notifies the physical function 406 that it is ready to migrate. The physical function driver 403 sends a live migration initiation request to the physical function 406, specifying the particular virtual hardware 408 to be subjected to live migration. The specific virtual hardware 408 is one of the plurality of virtual hardware of the system-on-chip 106; for convenience of illustration, it is assumed herein that the live migration initiation request is directed to the specific virtual function 407 and the corresponding specific virtual hardware 408.
The specific virtual hardware 408 may be a specific virtual computing device, such as the virtual computing device 142 in FIG. 1, and the information to be migrated then includes the configuration of the virtual computing device 142, the intermediate computation values stored in the virtual shared storage unit, and the data stored in the virtual storage unit core. The specific virtual hardware 408 may also be the specific virtual storage device 148 of FIG. 1, and the information to be migrated then includes the data stored in the specific virtual storage device 148. The specific virtual hardware 408 may also be the virtual video codec device 144 or the virtual JPEG codec device 146, and the information to be migrated then includes the configuration of the virtual video codec device 144 or the virtual JPEG codec device 146 and the corresponding codec information.
In step 307, the physical function 406 uploads to the physical function driver 403 data including drivers for the specific virtual function 407, firmware, and information for the specific virtual hardware 408, context information, and state information thereof. First, the physical function 406 sends an instruction to the physical function driver 403 of the kernel space 104, which records information about the specific virtual hardware 408, so that the physical function driver 403 knows how much data needs to be migrated. At this point, VMLU QOM 404 is in stop-copy (stop and copy) phase and does not allocate physical resources to user space 102, and user space 102 naturally has no time slice to run programs, thereby interrupting the connection between user space 102 and specific virtual function 407, but other virtual functions and their corresponding virtual hardware are running as usual. After idling the specific virtual function 407, the physical function 406 batch fetches the information to be migrated from the specific virtual hardware 408, and sends the information to the physical function driver 403. After the information to be migrated is sent, the physical function 406 sends an end signal to the physical function driver 403.
In step 308, the VMLU QOM 404 obtains the information to be migrated from the physical function driver 403. The physical function driver 403 of the kernel space sends the information to be migrated to the VMLU QOM 404.
In step 309, the VMLU QOM 404 embeds the information to be migrated in the instruction to be migrated, and transmits it to the Libvirt 401.
In step 310, after the instruction to be migrated is sent, the physical function 406 releases the resources of the specific virtual hardware 408 and the specific virtual function 407, the VMLU QOM 404 sends an end signal to the virtual function driver 405, the virtual function driver 405 sends a control signal to the interface 409 of the virtual function driver, and the guest operating system resumes issuing tasks. This ends the entire migration save path.
In more detail, the system of FIG. 4 further includes a serialization device 410, which, in response to the live migration initiation request in step 307, serializes data such as the driver and firmware of the specific virtual function 407 and the information, context information and state information of the specific virtual hardware 408, so as to generate the information to be migrated and upload it to the physical function driver 403. The serialization device 410 of this embodiment may be implemented in hardware or firmware. If implemented in hardware, the serialization device 410 is arranged in the system-on-chip 106; if implemented in firmware, it is stored in a read-only memory device of the system-on-chip 106.
In order to ensure that the destination server can successfully complete the migration recovery path, the information to be migrated generated in step 307 must follow a protocol: the source server generates the information to be migrated based on the protocol, and the destination server interprets the information to be migrated according to the protocol, so as to correctly recover the configuration and data. In order to fully describe the state and data of the specific virtual function 407 and the specific virtual hardware 408, the data structure of the information to be migrated specified by the protocol of this embodiment is shown in FIG. 5, and the serialization device 410 generates a three-layer framework under this protocol: a protocol layer 51, a data structure layer 52 and a serialization layer 53.
The protocol layer 51 is used to record the protocol version of the information to be migrated, the attribution and length of the data, and other information. In this embodiment, the serialization apparatus 410 generates 10 identifiers at the protocol layer 51, which are a magic number identifier 501, a version identifier 502, a request response identifier 503, a command identifier 504, a sequence number identifier 505, a data source identifier 506, a byte identifier 507, a domain identifier 508, a reserved identifier 509, and a payload identifier 510, respectively. The role of these identifiers is illustratively described below.
The magic number identifier 501 is set to 4 bytes and marks the beginning of the information to be migrated. More specifically, the characters of the magic number identifier 501 are fixed; when the destination server receives a command, as long as it recognizes the characters in the magic number identifier 501, it knows that the command carries information to be migrated, and the operation of the migration recovery path then begins.
The version identifier 502 is set to 2 bytes to mark the version of the information to be migrated. As described above, if the system version of the source server is not consistent with the system version of the destination server, and particularly if the system version of the source server is higher than the system version of the destination server, a problem of compatibility may occur, and in order to make the destination server determine the compatibility, the protocol layer 51 uses the version identifier 502 to record the version of the information to be migrated, that is, record the system version of the source server.
The request response identifier 503 is set to 1 byte to indicate that the instruction is a request or a response.
The command identifier 504 is set to 1 byte to indicate the task type of the information to be migrated, and the task type of this embodiment includes the migration status/data and the update data dictionary. The migration status and data are described before and will not be described in detail. The data dictionary refers to definition and description of data items, data structures, data streams, data stores, processing logic, and the like of data, and aims to make detailed description on each element of the data. In short, a data dictionary is a collection of information describing data, a collection of definitions for all data elements used in a system. An update data dictionary is defined and described by the data items, data structures, data flows, data stores, processing logic, etc. of the update data.
The sequence number identifier 505 is set to 4 bytes and is used to record the serial number of the information to be migrated, where the serial number reflects the order among the pieces of information to be migrated.
The data source identifier 506 is set to 2 bytes and describes which device the information in the information to be migrated comes from, i.e., the specific virtual hardware 408 in FIG. 4, that is, at least one of the virtual computing device, the virtual video codec device, the virtual JPEG codec device and the virtual storage device corresponding to the specific virtual function 407 to be live-migrated.
The byte identifier 507 is set to 8 bytes to record the total number of bytes of information to be migrated or the total number of bytes of payload.
The domain identifier 508 is used to mark the specific virtual function to be migrated, i.e., the specific virtual function 407 in FIG. 4.
The reserved identifier 509 is set to 2 bytes, and is reserved for use when other information needs to be described later.
The payload identifier 510 is used to record information of the data structure layer 52. The data structure layer 52 is used to represent the organizational structure of the information to be migrated. For live migration, it is generally not necessary to describe the data topology and the association between the data structures in too much detail, because the source server and the destination server have similar or even identical frameworks, so the data structure layer 52 of this embodiment does not need to describe much information, as long as the destination server has enough information to understand the information to be migrated of the source server.
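Taken together, the protocol layer 51 can be pictured as a fixed header followed by the payload. The C sketch below only summarizes the byte widths listed above; the width of the domain identifier 508 is not stated in the text, and the field packing is an assumption for illustration, not the literal wire format.

#include <stdint.h>

/* Illustrative header for the protocol layer 51 (widths follow the text where
 * given; the domain width and the packing are assumptions). */
#pragma pack(push, 1)
typedef struct {
    uint8_t  magic[4];     /* magic number identifier 501: fixed start marker      */
    uint16_t version;      /* version identifier 502: system version of the source */
    uint8_t  req_resp;     /* request response identifier 503: request or response */
    uint8_t  command;      /* command identifier 504: migrate state/data or update
                              the data dictionary                                  */
    uint32_t sequence;     /* sequence number identifier 505: serial number        */
    uint16_t data_source;  /* data source identifier 506: which virtual device     */
    uint64_t byte_count;   /* byte identifier 507: total bytes of info or payload  */
    uint16_t domain;       /* domain identifier 508: specific virtual function     */
    uint16_t reserved;     /* reserved identifier 509                              */
    uint8_t  payload[];    /* payload identifier 510: data structure layer 52      */
} protocol_layer_header_t;
#pragma pack(pop)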
The information to be migrated in the present disclosure is divided into two types, one is configuration, and the other is data.
When the information to be migrated is configuration information, the protocol framework generated by the serialization device 410 in the data structure layer 52 in this embodiment is shown as the configuration framework 54, and includes a domain identifier 511, a chip identifier 512, a board identifier 513, a microcontroller identifier 514, a firmware identifier 515, a host driver identifier 516, a virtual machine identifier 517, a reserved identifier 518, a computing device identifier 519, a storage device identifier 520, a video codec device identifier 521, a JPEG codec device identifier 522, a PCIe identifier 523, and a reserved identifier 524.
The domain identifier 511 is used to mark a particular virtual function 407; the chip identifier 512 is used to record the chipset model of the source server; the board identifier 513 is used to record the board version or model of the source server.
The microcontroller identifier 514 records the version of the microcontroller of the source server; the microcontroller is a general control element in the system-on-chip 106 for detecting or controlling the server environment, such as the server temperature and operating frequency.
The firmware identifier 515 is used to record the firmware version of the source server; the host driver identifier 516 is used to record the host driver software version of the source server; the virtual machine identifier 517 is used to record the virtual machine driver software version of the source server; the reserved identifier 518 and the reserved identifier 524 are not used for now and are reserved for describing other information later.
The computing device identifier 519, the storage device identifier 520, the video codec device identifier 521 and the JPEG codec device identifier 522 are collectively referred to as specific device identifiers and describe the configuration of the specific virtual hardware 408 in FIG. 4. In more detail, the computing device identifier 519 describes the configuration of the virtual computing device of the source server (e.g., the virtual computing device 142 of FIG. 1); the storage device identifier 520 records the configuration of the virtual storage device of the source server (e.g., the virtual storage device 148 of FIG. 1); the video codec device identifier 521 records the configuration of the virtual video codec device of the source server (e.g., the virtual video codec device 144 of FIG. 1); and the JPEG codec device identifier 522 describes the configuration of the virtual JPEG codec device of the source server (e.g., the virtual JPEG codec device 146 of FIG. 1).
The PCIe identifier 523 is used to describe the configuration of the virtual interface of the source server (e.g., the virtual interface 140 of FIG. 1), where the virtual interface refers to the PCIe virtual interface assigned to the specific virtual function 407.
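For reference, the configuration framework 54 can be sketched as a record with one field per identifier 511-524. Since the text does not give field widths for this frame, the widths below are placeholders chosen only for illustration.

#include <stdint.h>

/* Hypothetical layout of configuration framework 54; the field order follows
 * identifiers 511-524 above, while the widths are assumptions. */
typedef struct {
    uint16_t domain;            /* domain identifier 511: specific virtual function 407 */
    uint16_t chip;              /* chip identifier 512: chipset model of the source     */
    uint16_t board;             /* board identifier 513: board version or model         */
    uint16_t microcontroller;   /* microcontroller identifier 514: MCU version          */
    uint16_t firmware;          /* firmware identifier 515: firmware version            */
    uint16_t host_driver;       /* host driver identifier 516: host driver version      */
    uint16_t virtual_machine;   /* virtual machine identifier 517: VM driver version    */
    uint16_t reserved0;         /* reserved identifier 518                              */
    uint32_t computing_device;  /* computing device identifier 519                      */
    uint32_t storage_device;    /* storage device identifier 520                        */
    uint32_t video_codec;       /* video codec device identifier 521                    */
    uint32_t jpeg_codec;        /* JPEG codec device identifier 522                     */
    uint32_t pcie;              /* PCIe identifier 523: virtual interface configuration */
    uint16_t reserved1;         /* reserved identifier 524                              */
} configuration_frame_t;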
When the information to be migrated is data information, the data information is originally stored in a memory, where the memory is a virtual storage unit directly accessible by the specific virtual hardware 408. It may be an internal storage space of the virtual computing device 142, the virtual video codec device 144 or the virtual JPEG codec device 146, for example the virtual shared storage unit in the virtual computing device 142, or it may be the virtual storage device 148. The serialization device 410 generates the data frame 55 to carry this information. This embodiment considers that some complex scenarios may need to describe the association between data, so the serialization device 410 uses specific symbols to express the association between data, so that the destination server can completely and accurately restore the data according to the information.
The data information described by the data structure layer 52 may be several data of different types but related. The serialization apparatus 410 of this embodiment defines a structure body including at least one type according to the association between data, and each type is composed of at least one variable (i.e. data). In other words, several related variables are grouped into one type, and several related types are grouped into one structure. These structures, types, variables and their relationships are stored in the aforementioned memory.
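As a concrete picture of this grouping, the structure "foo_nested" used in the example below can be imagined roughly as the following C declaration (a hypothetical sketch only; the actual in-memory definition is not given in the text).

/* Hypothetical source-side view: related variables (a and b) are grouped into
 * one type (foo_nested_t), and related types are grouped into one structure
 * (foo_nested), matching the example described below. */
typedef struct {
    int a;   /* variable a, value 20 in the example */
    int b;   /* variable b, value 10 in the example */
} foo_nested_t;

struct foo_nested {
    foo_nested_t values;   /* a structure may contain one or more such types */
};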
When marking a structure, the serialization device 410 adds a prefix before the name of the structure as a start symbol representing the structure. In this embodiment a character is used as the prefix symbol, and the prefix symbol may be any character other than an English letter or digit, such as ".", "$", "/", "#", "%", "&", "-", and the like. For convenience of explanation, the English period "." is used as the prefix below.
Specifically, the data frame 55 generated by the serialization apparatus 410 includes a symbol identifier 525, a type identifier 526, a key identifier 527, and an entity identifier 528, which are used to describe and describe structures, types, and variables.
The symbol identifier 525 is used to mark the beginning of the structure, i.e., of the data frame 55, and the serialization device 410 places the prefix symbol and the name of the structure in the symbol identifier 525 according to the protocol. Taking the structure name "foo_nested" as an example, the symbol identifier 525 is written as ".foo_nested". Since the source server and the destination server follow the same protocol, when the destination server recognizes the prefix symbol ".", it knows that the prefix symbol is followed by the name of a structure and that the identifiers that follow are all associated descriptions of that structure.
The type identifier 526 is used to record the various types under the structure, including tree, image, linked list, heap, integer, floating point, etc. The name of a type may be defined by the serialization device 410 or may follow the name used when the data is stored in the memory. For example, if an integer a (with a value of 20) and an integer b (with a value of 10) are defined as the same type under the structure, and the serialization device 410 names this type "foo_nested_t", then the type identifier 526 records the type name "foo_nested_t" of integer a and integer b.
The key identifier 527 is used to record the name of a variable under that type. When marking a variable, according to the protocol, the serialization device 410 forms the key as the contents of the symbol identifier 525, followed by the prefix symbol, followed by the variable name. Taking the aforementioned integer a and integer b as an example, since the type "foo_nested_t" has two variables, integer a and integer b, the serialization device 410 first describes integer a in the data frame 55, and thus the key identifier 527 of integer a is ".foo_nested.a". The entity identifier 528 then records the value of the variable; integer a has a value of 20, so the entity identifier 528 directly records "20".
Since the type "foo _ nested _ t" has a variable b, after describing the key identifier 527 and the entity identifier 528 of the integer a, the key identifier 527 and the entity identifier 528 of the integer b are followed by the value "foo _ nested.b" and "10", respectively.
FIG. 6 shows the description of the structure "foo_nested" in the data structure layer 52. As can be seen from the foregoing description, when a structure is described in the information to be migrated, this embodiment describes the structure name in the symbol identifier 525, the name of a type under the structure in the type identifier 526, the variable name under that type in the key identifier 527, and the variable value or character string in the entity identifier 528. If the same type has multiple variables, the key identifier 527 and entity identifier 528 are repeated after the type identifier 526 of that type until all variables are described. If the structure has multiple types, the first type and all its variables are described first, then the second type and all its variables, and so on, until all members of the structure are described.
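Putting the example together, the entries that the serialization device 410 writes into the data frame 55 for "foo_nested" can be sketched as follows; the emit() helper is purely illustrative and only prints the identifier/value pairs in the order described above.

#include <stdio.h>

/* Illustrative only: prints the data frame 55 entries for the example
 * structure "foo_nested" (type "foo_nested_t", variables a = 20 and b = 10). */
static void emit(const char *identifier, const char *value)
{
    printf("%-22s %s\n", identifier, value);
}

int main(void)
{
    emit("symbol identifier 525", ".foo_nested");   /* prefix "." + structure name     */
    emit("type identifier 526",   "foo_nested_t");  /* type grouping variables a and b */
    emit("key identifier 527",    ".foo_nested.a"); /* symbol contents + "." + name    */
    emit("entity identifier 528", "20");            /* value of a                      */
    emit("key identifier 527",    ".foo_nested.b");
    emit("entity identifier 528", "10");            /* value of b                      */
    return 0;
}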
If the variables are simple structures such as numbers, strings, arrays, lists, etc., the data frame 55 is sufficient to record all information. When the variable has a complex structure, the entity identifier 528 is further expanded into the serialization layer 53 to serialize the complex structure. Returning to FIG. 5, the serializing means 410 generates a magic number identifier 529, a length identifier 530, a byte order identifier 531, a compression identifier 532, a type identifier 533, a key identifier 534, a count identifier 535, a format identifier 536, and a numeric identifier 537 under the serialization layer 53.
The magic number identifier 529 is a specific character used to indicate the beginning of a new data segment, i.e., the beginning of the serialization layer 53. When the destination server reads the magic number identifier 529, it knows that the information of the serialization layer 53 follows and performs the corresponding processing.
The length identifier 530 is used to indicate the length of the serialization layer 53.
The byte order identifier 531 is used to indicate the storage byte order of the data in the serialization layer 53, which is typically big-endian or little-endian. Big-endian means that the high-order bytes of the data are stored at the low addresses of the memory and the low-order bytes at the high addresses; this storage mode is similar to processing the data sequentially as a character string, i.e., the addresses increase from low to high while the data is stored from its high-order end to its low-order end. Little-endian means that the high-order bytes of the data are stored at the high addresses of the memory and the low-order bytes at the low addresses; this storage mode effectively matches address significance with data significance, the high-address part having high weight and the low-address part having low weight.
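As a generic illustration (not specific to this protocol), the 32-bit value 0x12345678 is stored as the byte sequence 12 34 56 78 at increasing addresses in big-endian mode and as 78 56 34 12 in little-endian mode; the small C program below prints the order actually used by the host it runs on.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0x12345678;
    const uint8_t *bytes = (const uint8_t *)&value;
    /* Prints "12 34 56 78" on a big-endian host and "78 56 34 12" on a
     * little-endian host. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}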
The compression identifier 532 is used to indicate the compression form of the data information. Data is appropriately compressed during transmission to reduce the transmission volume; this embodiment does not limit the compression form, but BDI (Base-Delta-Immediate) compression is preferably used.
The type identifier 533 is used to identify the type of data information. The type identifier 533 is different from the type identifier 526, the type identifier 526 is used to describe various types under the structure, and the type identifier 533 is used to indicate the type of the data itself.
The key identifier 534 is used to identify the variable name under the type in the type identifier 533.
The count identifier 535 is used to indicate the number of variables under the type in the type identifier 533.
The format identifier 536 is used to indicate the format of the variable under the type in the type identifier 533; for example, int16, int32 and int64 indicate that the variable is a 16-bit, 32-bit or 64-bit integer, respectively.
The numeric identifier 537 records the numeric value or string of a variable. In this embodiment, if a variable has multiple values, the format identifier 536 is directly followed by multiple numeric identifiers 537, one for each value; for example, if a variable is a list containing 128 values, the type identifier 533 indicates that the data is a list, the count identifier 535 indicates that there are 128 values in total, and 128 numeric identifiers 537 store those 128 values respectively.
More complex data, such as nested data, requires the use of the serialization layer 53. Nested data refers to a data format in which one or more tables, images, layers, or functions are embedded within an existing table, image, layer, or function. How the serialization apparatus 410 serializes nested data is described below.
When processing nested data, since the nested data has a hierarchical structure, it must also be recorded layer by layer when it is stored for live migration. In practice, nested data may contain a multi-layer nested structure; for convenience of description, nested data with two layers is taken as an example.
The serialization apparatus 410 likewise represents the nested data in a structural manner, dividing it into a first-layer structure (the outer structure) and a second-layer structure (the inner structure), i.e., the second-layer structure is nested inside the first-layer structure. When generating the information to be migrated, the serialization apparatus 410 divides the data structure layer 52 into two segments: the symbol identifier 525 (first symbol identifier), the type identifier 526 (first type identifier), and the key identifier 527 (first key identifier) of the first segment describe the information of the first-layer structure, and the serialization layer 53 of the first-layer structure is expanded in the entity identifier 528 (first entity identifier).
The serialization apparatus 410 generates each identifier in the serialization layer 53 of the first layer structure as follows: the magic number identifier 529 indicates the beginning of the serialization of the first layer structure, the length identifier 530 indicates the length of the serialization layer 53 of the first layer structure, the byte order identifier 531 indicates whether the first layer structure is stored in a big-endian mode or a little-endian mode, the compression identifier 532 indicates the compressed form of the first layer structure, the type identifier 533 indicates the type of the first layer structure, the key identifier 534 indicates the name of the variable in the first layer structure, the count identifier 535 indicates the number of variables in the first layer structure, the format identifier 536 indicates the format of the variable in the first layer structure, and the numerical identifier 537 indicates the value of each variable in the first layer structure.
The serialization apparatus 410 then describes the second-layer structure immediately after the entity identifier 528 of the first-layer structure. The serializing means 410 generates a key identifier 527 (second key identifier) describing the name of the second-layer structure and expands the serialization layer 53 of the second-layer structure in an entity identifier 528 (second entity identifier). The identifiers of the second-layer structure in the serialization layer 53 are described in the same manner as for the first-layer structure and are not repeated here.
For example, assume the following nested data:
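The original publication presents the declaration as a figure. A reconstructed C sketch, inferred only from the description in the following paragraph, might read as follows; the field widths (int32_t) and the exact member layout are assumptions, while the member names array, foo1, integer, str, and seq and the type names foo_nested_t and foo1_t are taken from the identifiers discussed in connection with fig. 7:

```c
#include <stdint.h>

typedef struct {
    int32_t     integer;   /* the number 91 */
    const char *str;       /* the string "Hello world" */
} foo1_t;

typedef struct {
    int32_t array[3];      /* the integer array {26, 91, 1029} */
    foo1_t  foo1;          /* the nested second-layer structure */
    int32_t seq;           /* the integer number 10029 */
} foo_nested_t;

static foo_nested_t foo_nested = {
    .array = {26, 91, 1029},
    .foo1  = { .integer = 91, .str = "Hello world" },
    .seq   = 10029,
};
```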
The code expresses that the nested data comprises a two-layer structure: the name of the first-layer structure is "foo_nested" and it contains three types. The first type is an array of integers whose variable name is "array" and whose values are {26, 91, 1029}; the second type is a second-layer structure containing the number 91 and the string "Hello world"; the third type is the integer 10029.
Fig. 7 shows the data structure generated when the nested data is serialized. Since the description here concerns only the serialization of the nested data, the identifiers of the protocol layer 51 are omitted. In the data structure layer 52 generated by the serialization apparatus 410, the first symbol identifier 701 records the first-layer structure name, i.e., ".foo_nested"; the first type identifier 702 records the first-layer structure type, i.e., "foo_nested_t"; the first key identifier 703 and the first entity identifier 704 describe the information of the first type, the integer array {26, 91, 1029}, where the first key identifier 703 records the variable name ".foo_nested.array" and the first entity identifier 704 records the 3 values of the array, i.e., the integers 26, 91, and 1029. Since the array is simple data, the first entity identifier 704 does not need to expand the serialization layer 53.
After the information of the first type is described, the information of the second type follows. After the first entity identifier 704, the second key identifier 705 describes the second type, which is the second-layer structure of the nested data, named "foo1"; according to the protocol, the second key identifier 705 is recorded as ".foo_nested.foo1", denoting a "foo1" structure nested under the "foo_nested" structure.
The serializing means 410 expands the serialization layer 53 at the second entity identifier 706 to represent the second-layer structure, wherein the magic number identifier 707 indicates the start of the second-layer structure, the length identifier 708 records the length of the serialization layer 53, the byte order identifier 709 indicates whether the second-layer structure is stored in big-endian or little-endian order, the compression identifier 710 indicates the compressed form of the second-layer structure, and the second symbol identifier 711 records the name of the second-layer structure, which is ".foo_nested.foo1" according to the protocol; the second type identifier 712 records the type name of the second-layer structure, i.e., "foo1_t". Since the data {91, "Hello world"} of the second-layer structure includes the number 91 and the string "Hello world", the serialization apparatus 410 records the related information with two pairs of key identifiers and entity identifiers. A third key identifier 713 and a third entity identifier 714 record the number, where the third key identifier 713 records the variable name ".foo_nested.foo1.integer" and the third entity identifier 714 records the value 91; a fourth key identifier 715 and a fourth entity identifier 716 record the string, where the fourth key identifier 715 records the variable name ".foo_nested.foo1.str" and the fourth entity identifier 716 records the string "Hello world".
After the information of the second type is recorded, the information of the third type is recorded in the data structure layer 52. After the second entity identifier 706, the serializing means 410 records the variable name of the third type, i.e., ".foo_nested.seq", in a fifth key identifier 717, and the numeric value of the third type, i.e., 10029, in a fifth entity identifier 718. At this point, the serialization apparatus 410 has loaded the information of all the nested data into the information to be migrated.
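For reference, the key paths and values that fig. 7 lays out can be summarized as the following C table; this is only an illustrative flattening of the dotted naming convention described above, not the on-wire encoding, and the value column is rendered as text purely for readability:

```c
/* One row per (key identifier, entity/numeric identifier) pair of fig. 7. */
static const struct { const char *key; const char *value; } fig7_layout[] = {
    { ".foo_nested",              "(first symbol identifier, type foo_nested_t)"  },
    { ".foo_nested.array",        "26, 91, 1029"                                  },
    { ".foo_nested.foo1",         "(second-layer structure, serialization layer)" },
    { ".foo_nested.foo1.integer", "91"                                            },
    { ".foo_nested.foo1.str",     "Hello world"                                   },
    { ".foo_nested.seq",          "10029"                                         },
};
```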
In summary, when the serialization device 410 generates the information to be migrated, it may concatenate a plurality of identifiers according to actual requirements to extend the lengths of the data structure layer 52 and the serialization layer 53 as needed, and then record the total number of bytes of the information to be migrated in the byte identifier 507. In other words, the data structure layer 52 and the serialization layer 53 may each include a plurality of symbol identifiers, type identifiers, key identifiers, or entity identifiers concatenated together, each describing a different data entity.
After the serialization apparatus 410 generates the information to be migrated, the physical function 406 sends the information to be migrated to the physical function driver 403 of the kernel space 104 in step 307, thereby completing the data serialization process.
Another embodiment of the present disclosure is a method for performing a live migration saving path on the system described above; in more detail, this embodiment is the flow of generating the data structure of the information to be migrated in step 307, and fig. 8 shows its flowchart.
In step 801, a live migration initiation request is received, the live migration initiation request specifying a live-migration-specific virtual function, the specific virtual function being one of a plurality of virtual functions. In step 306 of FIG. 3, the physical function driver 403 sends a live migration initiation request to notify the physical function 406 to prepare for migration; the physical function 406 receives the request, which specifies the live-migration-specific virtual function 407.
In step 802, a data structure of information to be migrated is generated. The serialization device 410 responds to the live migration start request and generates a data structure of information to be migrated. This step can be further refined into the flow shown in fig. 9.
In step 901, a protocol layer of the data structure is generated; in step 902, generating a data structure layer of the data structure; in step 903, a serialization layer of the data structure is generated.
In step 904, at least one of a magic number identifier, a version identifier, a request response identifier, a command identifier, a sequence number identifier, a data source identifier, a byte identifier, a domain identifier, a reserved identifier, a payload identifier, and the like is generated in the protocol layer.
In step 905, it is determined whether the information to be migrated is configuration or data.
If it is configuration, step 906 is executed to generate at least one of a domain identifier, a chip identifier, a board identifier, a microcontroller identifier, a firmware identifier, a host driver identifier, a virtual machine identifier, a reserved identifier, a computing device identifier, a storage device identifier, a video codec device identifier, a JPEG codec device identifier, a PCIe identifier, and the like in the data structure layer.
If it is data, step 907 is executed to generate at least one of a symbol identifier, a type identifier, a key identifier, an entity identifier, and the like in the data structure layer.
Next, step 908 is performed, wherein at least one of a magic number identifier, a length identifier, a byte order identifier, a compression identifier, a type identifier, a key identifier, a count identifier, a format identifier, a numerical identifier, and the like is generated in the serialization layer.
The definitions and descriptions of these identifiers have been described in the foregoing embodiments, and details are not repeated.
In the last step 803, the information to be migrated is sent to the kernel space. After the serialization apparatus 410 generates the information to be migrated, the physical function 406 sends the information to be migrated to the physical function driver 403 of the kernel space 104.
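The branch between configuration and data in steps 905 to 908 can be summarized in the following C sketch; the function names are placeholders invented for illustration and do not appear in the embodiments, and each stub merely names the identifiers a real implementation would emit:

```c
#include <stdio.h>

typedef enum { INFO_CONFIGURATION, INFO_DATA } info_kind_t;

/* Hypothetical helpers; a real implementation would append the named identifiers
   to the buffer holding the information to be migrated. */
static void emit_protocol_layer(void)      { puts("protocol layer: magic number, version, command, ..."); }        /* step 904 */
static void emit_config_identifiers(void)  { puts("data structure layer: domain, chip, board, firmware, ..."); }   /* step 906 */
static void emit_data_identifiers(void)    { puts("data structure layer: symbol, type, key, entity"); }            /* step 907 */
static void emit_serialization_layer(void) { puts("serialization layer: magic number, length, byte order, ..."); } /* step 908 */

static void generate_migration_info(info_kind_t kind) {
    emit_protocol_layer();
    if (kind == INFO_CONFIGURATION) {
        emit_config_identifiers();
    } else {
        emit_data_identifiers();
        emit_serialization_layer();
    }
}

int main(void) { generate_migration_info(INFO_DATA); return 0; }
```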
Another embodiment of the present disclosure is a method for serializing nested data, where the nested data includes at least a first-layer structure and a second-layer structure. The serialization device 410 generates the information to be migrated in response to a live migration initiation request; the steps of generating the information to be migrated are shown in fig. 10. In step 1001, a first symbol identifier is generated in the data structure layer of the information to be migrated to record the name of the first-layer structure; in step 1002, a second symbol identifier is generated in the serialization layer to record the name of the second-layer structure. The details of serializing nested data have been described in the foregoing embodiments and are not repeated.
Through the above embodiments, the present disclosure implements data serialization in the migration saving path; while the foregoing flow is executed, the tasks from the user space 102 are still executed for the non-specific virtual functions and hardware, which are not affected.
Another embodiment of the present disclosure concerns the migration restoration path. The destination server in this embodiment is also the system of fig. 1 and has the same environment as the source server. FIG. 11 is a flowchart illustrating the migration restoration path, and FIG. 12 is a schematic diagram illustrating the migration restoration path in the environment of fig. 1. In more detail, after the embodiments of figs. 3 and 4 complete the migration saving path, the information to be migrated is migrated to the destination server.
In step 1101, Libvirt 1201 initiates a request to QEMU 1202 to import the information to be migrated. QEMU 1202 receives, from off-chip, the information to be migrated that was sent out in the embodiments of figs. 3 and 4, and initiates a live migration initiation request. Off-chip here refers to the source server; the source server and the destination server may be on the same hardware platform or on different hardware platforms.
In step 1102, VMLU QOM 1204 sends the information to be migrated to the physical function driver 1203. After receiving the information to be migrated, VMLU QOM 1204 responds to the live migration initiation request, calls the write function, and sends the information to be migrated in the write function to the physical function driver 1203.
In step 1103, the physical function 1206 receives information to be migrated. In the previous step, VMLU QOM 1204 sends the information to be migrated to physical function driver 1203, and physical function driver 1203 sends the information to be migrated to physical function 1206.
In step 1104, the configuration, data, and context thereof are restored for the particular virtual function 1207 and the particular virtual hardware 1208.
First, the physical function 1206 idles the specific virtual function 1207 so that it temporarily does not communicate with the user space 102; the other virtual functions run as usual. After idling the specific virtual function 1207, the physical function 1206 sends the information to be migrated to the specific virtual hardware 1208 via the specific virtual function 1207.
Likewise, the specific virtual hardware 1208 may be the virtual computing device, the virtual storage device, the virtual video codec device, or the virtual JPEG codec device of fig. 1. The information to be migrated includes the drivers, firmware and hardware information, context information, state information, etc. associated with the specific virtual hardware 1208. After the recovery, the specific virtual function 1207 and the specific virtual hardware 1208 have the same environment and data as the specific virtual function 407 and the specific virtual hardware 408.
In step 1105, the physical function 1206 reports to the physical function driver 1203 that the migration is completed, i.e., the physical function 1206 sends an end signal to the physical function driver 1203 of the kernel space 104.
In step 1106, the physical function driver 1203 notifies VMLU QOM 1204 that the live migration has been completed, i.e., the physical function driver 1203 sends the end signal on to QEMU 1202.
In step 1107, VMLU QOM 1204 changes state to notify the virtual function driver 1205 that the live migration has been completed. VMLU QOM 1204 responds to the end signal by notifying the virtual function driver 1205 that the live migration is complete and by changing the state of the base address register to point to the specific virtual function 1207 and the specific virtual hardware 1208 of the destination server.
In step 1108, the virtual function driver 1205 sends a control signal to the virtual function driver's interface 1209 to resume execution of the tasks of the guest operating system.
In step 1109, the virtual function driver's interface 1209 notifies the virtual function driver 1205 to resume executing the tasks of the guest operating system. The virtual function driver 1205 again receives tasks from the processors of the user space 102 through the virtual function driver's interface 1209; these tasks no longer access the specific virtual hardware 408 of the source server but instead access the specific virtual hardware 1208 of the destination server.
In step 1110, the VMLU QOM 1204 notifies Libvirt 1201 that the live migration is complete, and Libvirt 1201 cleans up the allocated hardware resources on the source server. This completes the migration restoration path.
Combining the foregoing embodiments of the migration saving path and the migration restoration path, the present disclosure enables live migration of a virtualized application-specific integrated circuit (ASIC).
In more detail, the system of fig. 12 further includes a deserializer 1210 for responding to the live migration initiation request and, in step 1104, restoring the drivers, firmware and hardware information, context information, and state information of the specific virtual function 1207 and the specific virtual hardware 1208 according to the information to be migrated. The deserializing means 1210 of this embodiment may be implemented in hardware or firmware; if implemented in hardware, the deserializer 1210 is configured in the system-on-chip 106, and if implemented in firmware, it is stored in a read-only memory of the system-on-chip 106.
The deserializer 1210 implements the method of live migration recovery path, the flow of which is described in fig. 13. In step 1301, the deserializer 1210 receives information to be migrated. In step 1302, the deserializer 1210 deserializes the information of the protocol layer 51, which is refined to the flow of FIG. 14.
In step 1401, since the source server and the destination server follow the same protocol, the deserializing means 1210 can recognize the data structure of fig. 5; it identifies from the magic number identifier 501 the beginning of the protocol layer 51 of the information to be migrated. In step 1402, the version of the information to be migrated is identified from the version identifier 502 to confirm that the system version of the destination server is equal to or higher than the system version of the source server. In step 1403, whether the information is a request or a response is identified from the request response identifier 503; if it is a request, the migration restoration path continues, and if it is a response, the information is not information to be migrated and the restoration stops.
Next, in step 1404, the task type of the information to be migrated is identified from the command identifier 504, i.e., whether it migrates state and data or updates the data dictionary. In step 1405, the sequence number of the information to be migrated is identified from the sequence number identifier 505 to determine its ordering within the entire live migration restoration path.
Next, in step 1406, a specific virtual suite is identified from the data source identifier 506, wherein the specific virtual suite includes at least one of a virtual computing device, a virtual video codec device, a virtual JPEG codec device and a virtual storage device, and the deserializing device 1210 restores the information to be migrated to the specific virtual suite according to the information of the data source identifier 506.
Next, in step 1407, the total number of bytes of information to be migrated or the total number of bytes of payload is identified from the byte identifier 507. In step 1408, the information of the specific virtual function is retrieved from the domain identifier 508, and the information to be migrated is restored to the specific virtual function 1207. In step 1409, information for the data structure layer 52 is retrieved from the payload identifier 510.
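A minimal C sketch of the checks in steps 1401 to 1409 is given below; the header layout, the field widths, and the magic value are assumptions made purely for illustration, since the embodiment does not fix a wire format here:

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed header for the protocol layer 51; field order and widths are illustrative only. */
typedef struct {
    uint32_t magic;        /* magic number identifier 501     */
    uint16_t version;      /* version identifier 502          */
    uint8_t  is_request;   /* request response identifier 503 */
    uint8_t  command;      /* command identifier 504          */
    uint32_t sequence;     /* sequence number identifier 505  */
    uint32_t data_source;  /* data source identifier 506      */
    uint32_t total_bytes;  /* byte identifier 507             */
    uint32_t domain;       /* domain identifier 508           */
} protocol_layer_t;

#define MIGRATION_MAGIC 0x4D4C5530u   /* placeholder constant, not taken from the specification */

static bool check_protocol_layer(const protocol_layer_t *h, uint16_t local_version) {
    if (h->magic != MIGRATION_MAGIC) return false;   /* step 1401: not information to be migrated   */
    if (h->version > local_version)  return false;   /* step 1402: destination must be same or newer */
    if (!h->is_request)              return false;   /* step 1403: a response is not restored        */
    /* Steps 1404-1409 then use command, sequence, data_source, total_bytes, and domain
       to route the payload to the specific virtual function and virtual suite. */
    return true;
}
```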
In step 1303, the deserializer 1210 starts deserializing the information of the data structure layer 52, and first determines whether the data structure layer 52 describes configuration information or data information.
If the data structure layer 52 records configuration information, step 1304 is performed to deserialize the configuration information, which is detailed in the flow chart of FIG. 15. In step 1501, the deserializing means 1210 fetches the information of the specific hardware from the domain identifier 511, and prepares to restore the information to be migrated to the specific hardware 1208. In step 1502, the source server's chipset model is identified from the chip identifier 512 to determine if it is compatible with the destination server's chipset. In step 1503, the board version or model of the source server is identified from the board identifier 513 to determine whether the board is compatible with the board of the destination server. In step 1504, the model number of the source server's microcontroller is identified from the microcontroller identifier 514 to determine if it is compatible with the destination server's microcontroller.
In step 1505, the source server's firmware version is then identified from the firmware identifier 515 to determine if it is compatible with the destination server's firmware. In step 1506, the source server's host driver version is identified from the host driver identifier 516 to determine if it is compatible with the destination server's host driver. In step 1507, the version of the virtual machine driver software of the source server is identified from the virtual machine identifier 517 to determine whether it is compatible with the virtual machine driver software of the destination server.
Next, in step 1508, information of the specific device identifier is retrieved, i.e., retrieved from the computing device identifier 519, the storage device identifier 520, the video codec device identifier 521, and the JPEG codec device identifier 522 to restore the configuration of the specific device, i.e., the specific hardware 1208, which is one of the virtual computing device, the virtual video codec device, the virtual JPEG codec device, and the virtual storage device.
Finally, in step 1509, the configuration of the virtual interface is restored according to the information of the PCIe identifier 523.
If the data structure layer 52 records data information, step 1305 is executed to deserialize the data information, which is detailed in the flow of fig. 16. In step 1601, the deserializer 1210 identifies the beginning of the structure from the symbol identifier 525 and extracts the name of the structure; more specifically, since the symbol identifier 525 includes a prefix symbol, the deserializer 1210 first recognizes the prefix symbol and from it can identify the structure name and the identifiers that follow. In step 1602, the type is identified from the type identifier 526. In step 1603, the name of the variable is extracted from the key identifier 527; similarly, since the key identifier 527 includes a prefix symbol, the deserializer 1210 first recognizes the prefix symbol and from it extracts the variable name. In step 1604, the information of the serialization layer 53 is retrieved from the entity identifier 528.
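Since the example keys in this embodiment use "." as the prefix symbol (for instance ".foo_nested.foo1.integer"), one way a deserializer might recover the structure name and variable name is sketched below; the choice of "." and the helper itself are assumptions for illustration only:

```c
#include <stdio.h>
#include <string.h>

/* Split a dotted key such as ".foo_nested.foo1.integer" into its components:
   the structure name, any nested structure names, and finally the variable name. */
static void print_key_components(const char *key) {
    char buf[128];
    strncpy(buf, key, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    for (char *tok = strtok(buf, "."); tok != NULL; tok = strtok(NULL, "."))
        printf("component: %s\n", tok);
}

int main(void) {
    print_key_components(".foo_nested.foo1.integer");
    return 0;
}
```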
Returning to FIG. 13, step 1306 is then performed to identify or retrieve the information of the serialization layer 53, which is detailed in the flow of fig. 17. In step 1701, the deserializer 1210 identifies the beginning of the serialization layer 53 from the magic number identifier 529. In step 1702, the length of the serialization layer 53 is identified from the length identifier 530. In step 1703, the storage byte order of the data is identified from the byte order identifier 531 as big-endian or little-endian. In step 1704, the compressed form of the data is identified from the compression identifier 532. In step 1705, the type is identified from the type identifier 533. In step 1706, the variable name is retrieved from the key identifier 534. In step 1707, the number of variables is identified from the count identifier 535. In step 1708, the variable format is identified from the format identifier 536. In step 1709, the value or string of each variable is retrieved from the numeric identifier 537.
When more complex data is encountered during deserialization, such as nested data, the deserializer 1210 performs the following deserialization process.
When deserializing the nested data, the deserializing device 1210 is configured to: receive the information to be migrated, where the data structure layer 52 of the information to be migrated includes a first symbol identifier and the serialization layer 53 includes a second symbol identifier; retrieve first serialized data according to the first symbol identifier; retrieve second serialized data according to the second symbol identifier; restore the first serialized data to the first-layer structure; and restore the second serialized data to the second-layer structure. Finally, the first-layer structure and the second-layer structure are stored in the memory.
In more detail, the deserializer 1210 performs the flow shown in fig. 18 for the first-layer structure; the nested data of fig. 19 is used as the example. In step 1801, the information to be migrated is received, where its data structure includes the data structure layer 52 and the serialization layer 53, the data structure layer 52 includes a first symbol identifier 1901, and the serialization layer 53 includes a second symbol identifier 1909. In step 1802, the first serialized data, whose structure name is "foo_nested", is identified and retrieved based on the first symbol identifier 1901. In step 1803, the first type, whose type name is "foo_nested_t", is recovered from the first type identifier 1902. In step 1804, the variable name in the first-layer structure, i.e., "foo1", is restored from the first key identifier 1903. In step 1805, the information of the serialization layer 53, i.e., the information of the second-layer structure, is retrieved from the first entity identifier 1904; this step can be further refined into the flow shown in fig. 20.
In step 2001, the beginning of the second-layer structure is identified from the magic number identifier 1905. In step 2002, the length of the serialization layer 53 is identified from the length identifier 1906. In step 2003, the storage byte order of the data in the second-layer structure is identified from the byte order identifier 1907. In step 2004, the compressed form of the data in the second-layer structure is identified from the compression identifier 1908. In step 2005, the name of the second-layer structure, "foo_nested_foo1", is identified from the second symbol identifier 1909. In step 2006, the second type, named "foo_t", is identified from the second type identifier 1910. In step 2007, the number of variables is identified from the count identifier 1911. In step 2008, the variable format is identified from the format identifier 1912. In step 2009, the name "integer" of the first variable is retrieved from the second key identifier 1913. In step 2010, the value 91 of the first variable is retrieved from the first numeric identifier 1914. In step 2011, the name "str" of the second variable is retrieved from the third key identifier 1915. In step 2012, the string "Hello world" of the second variable is retrieved from the second numeric identifier 1916. In step 2013, the first-layer and second-layer structures are stored, i.e., all the information under the first-layer and second-layer structures has been restored.
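Put together, a highly simplified restore step for the two-layer example of figs. 19 and 20 might look as follows; the structure definitions, field widths, and the hard-coded example values are assumptions used only to show where the recovered numeric identifiers end up:

```c
#include <stdio.h>
#include <string.h>

/* Simplified in-memory form of the example structures; field widths are assumed. */
typedef struct { int integer; char str[32]; } foo1_t;
typedef struct { foo1_t foo1; } foo_nested_t;

/* In the embodiment these values come from the numeric identifiers of the
   serialization layer; here they are hard-coded to the values of the example. */
static void restore_example(foo_nested_t *out) {
    out->foo1.integer = 91;                  /* first numeric identifier 1914  */
    strcpy(out->foo1.str, "Hello world");    /* second numeric identifier 1916 */
}

int main(void) {
    foo_nested_t restored;
    restore_example(&restored);              /* steps 2009-2013 in miniature */
    printf("%d %s\n", restored.foo1.integer, restored.foo1.str);
    return 0;
}
```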
The deserializer 1210 deserializes the information to be migrated and restores the drivers, firmware and hardware information, context information and their state information of the specific virtual function 407 and specific hardware 408 in the source server to the memory of the specific virtual function 1207 and specific hardware 1208 on the destination server through the physical function 1206.
Fig. 21 is a block diagram illustrating an integrated circuit device 2100, in accordance with an embodiment of the disclosure. As shown in fig. 21, the integrated circuit device 2100, i.e., the system-on-chip 106 in the foregoing embodiments, includes a specific virtual suite 2102, wherein the specific virtual suite 2102 is at least one of a virtual computing device, a virtual video codec device, and a virtual JPEG codec device. Additionally, the integrated circuit device 2100 also includes a general interconnect interface 2104 and other processing devices 2106.
The other processing device 2106 may be one or more types of general purpose and/or special purpose processors such as a central processing unit, a graphics processing unit, an artificial intelligence processing unit, etc., and the number thereof is not limited but determined according to actual needs. The other processing device 2106 serves as an interface for the specific virtual suite 2102 to external data and control, and performs basic control including, but not limited to, data transfer, starting and stopping of the specific virtual suite 2102, and the like. Other processing devices 2106 may also cooperate with the particular virtual suite 2102 to perform computational tasks.
The universal interconnect interface 2104 may be used to transfer data and control instructions between the specific virtual suite 2102 and the other processing devices 2106. For example, the specific virtual suite 2102 can obtain the required input data from the other processing devices 2106 via the universal interconnect interface 2104 and write the input data to an on-chip storage unit of the specific virtual suite 2102. Further, the specific virtual suite 2102 can obtain control instructions from the other processing devices 2106 via the universal interconnect interface 2104 and write them to an on-chip control cache of the specific virtual suite 2102. Alternatively, the universal interconnect interface 2104 may also read the data in a storage module of the specific virtual suite 2102 and transmit it to the other processing devices 2106.
The integrated circuit device 2100 also includes a storage device 2108, which can be connected to the specific virtual suite 2102 and the other processing devices 2106, respectively. The storage device 2108 is the virtual storage device 148 and is used for storing data of the specific virtual suite 2102 and the other processing devices 2106; it is particularly suitable for data to be computed that cannot be entirely held inside the specific virtual suite 2102 or the other processing devices 2106.
According to different application scenarios, the integrated circuit device 2100 can be used as a system on chip (SoC) for devices such as mobile phones, robots, unmanned aerial vehicles, and video capture equipment, thereby effectively reducing the core area of the control part, increasing the processing speed, and reducing the overall power consumption. In this case, the universal interconnect interface 2104 of the integrated circuit device 2100 is connected to certain components of the apparatus, such as a camera, a display, a mouse, a keyboard, a network card, or a Wi-Fi interface.
The present disclosure also discloses a chip or integrated circuit chip that includes an integrated circuit device 2100. The present disclosure also discloses a chip package structure including the above chip.
Another embodiment of the present disclosure is a board card including the above chip package structure. Referring to fig. 22, the board 2200 may include, in addition to the plurality of chips 2202 described above, other components, including a memory device 2204, an interface device 2206, and a control device 2208.
The memory device 2204 is coupled to the chip 2202 within the chip package structure via a bus 2214 for storing data. The memory device 2204 may include multiple sets of memory cells 2210.
An interface device 2206 is electrically connected to the chip 2202 within the chip package structure. The interface device 2206 is used to enable data transmission between the chip 2202 and an external device 2212 (e.g., a server or a computer). In this embodiment, the interface device 2206 is a standard PCIe interface, and the data to be processed is transmitted from the server to the chip 2202 through the standard PCIe interface, so as to implement data transfer. The results of the calculations made by the chip 2202 are also communicated back to the external device 2212 by the interface device 2206.
The control device 2208 is electrically connected to the chip 2202 to monitor the state of the chip 2202. Specifically, the chip 2202 and the control device 2208 may be electrically connected through an SPI interface. The control device 2208 may include a microcontroller unit (MCU).
Another embodiment of the present disclosure is an electronic device or apparatus, which includes the board 2200. According to different application scenarios, the electronic device or apparatus may include a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet computer, a smart terminal, a mobile phone, a vehicle data recorder, a navigator, a sensor, a camera, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device. The vehicle comprises an airplane, a ship and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
Another embodiment of the disclosure is a computer readable storage medium having stored thereon serialized or deserialized computer program code which, when executed by a processor, performs the foregoing method.
The present disclosure can live-migrate the drivers, firmware and hardware information, context information, and state information of a specific virtual function and its virtual hardware from the source server to the destination server; serialization is used to generate the information to be migrated so that it can be transmitted conveniently, and the destination server deserializes the information to be migrated based on the same protocol to restore its configuration and data.
The foregoing may be better understood in light of the following clauses:
clause a1, a system for serializing nested data comprising at least a first level structure and a second level structure, the system comprising: the memory is used for storing the nested data; and a serialization device, configured to respond to a live migration initiation request to generate information to be migrated, where a data structure of the information to be migrated includes: a data structure layer including a first symbolic identifier for recording a name of the first layer structure; and a serialization layer including a second symbolic identifier to document a name of the second layer structure.
Clause a2, the system of clause a1, wherein the first hierarchical structure comprises at least one first type, the second hierarchical structure comprises at least one second type, the serialization mechanism generates a first type identifier in the data structure layer to document the first type, and generates a second type identifier in the serialization layer to document the second type.
Clause A3 the system of clause a1, wherein the serialization apparatus generates a first key identifier in the data structure layer to document variable names in the first layer structure and generates a second key identifier in the serialization layer to document variable names in the second layer structure.
Clause a4, the system of clause a1, wherein the serialization mechanism generates an entity identifier in the data structure layer to document the serialization layer information.
Clause a5, the system of clause a1, wherein the serializing means generates a magic number identifier in the serialized layer to indicate the start of the serialized layer.
Clause a6, the system of clause a1, wherein the serializing means generates a length identifier in the serialized layer to represent the length of the serialized layer.
Clause a7, the system of clause a1, wherein the serializing means generates a count identifier in the serialization layer to indicate a variable number.
Clause A8, the system of clause a1, wherein the serializing means generates a format identifier in the serialization layer to indicate a variable format.
Clause a9, the system of clause a1, wherein the serializing means generates a numerical identifier in the serialized layer to document a variable value.
Clause a10, a system for deserializing nested data, the nested data including at least a first level structure and a second level structure, the system comprising: a deserializer to: receiving information to be migrated, wherein a data structure of the information to be migrated comprises: a data structure layer including a first symbol identifier; and a serialization layer comprising a second symbol identifier; retrieving first serialized data based on the first symbol identifier; retrieving second serialized data based on the second symbol identifier; reducing the first serialized data to the first layer structure; and reducing the second serialized data to the second layer structure; and the memory is used for storing the first layer of structural body and the second layer of structural body.
Clause a11, the system of clause a10, wherein the first hierarchical structure comprises at least one first type, the second hierarchical structure comprises at least one second type, the data structure layer comprises first type identifiers, the serialization layer comprises second type identifiers, the deserializing means is to: identifying the first type from the first type identifier; and identifying the second type from the second type identifier.
Clause a12, the system of clause a10, wherein the data structure layer comprises a first key identifier, the serialization layer comprises a second key identifier, the deserializing means is to: restoring variable names in the first layer of structure from the first key identifier; and restoring variable names in the second tier structure from the second key identifier.
Clause a13, the system of clause a10, wherein the data structure layer comprises an entity identifier, the deserializing means to: retrieving the serialization layer information from the entity identifier.
Clause a14, the system of clause a10, wherein the serialization layer comprises a magic number identifier, the deserializing means to: identifying a start of the second tier structure from the magic number identifier.
Clause a15, the system of clause a10, wherein the serialization layer comprises a length identifier, the deserializing means to: identifying a length of the serialization layer from the length identifier.
Clause a16, the system of clause a10, wherein the serialization layer comprises a count identifier, the deserializing means to: a variable number is identified from the count identifier.
Clause a17, the system of clause a10, wherein the serialization layer comprises a format identifier, the deserializing means to: a variable format is identified from the format identifier.
Clause a18, the system of clause a10, wherein the serialization layer comprises a numerical identifier, the deserializing means to: a variable value is taken from the value identifier.
Clause a19, an integrated circuit device, comprising the system of any one of clauses a 1-18.
Clause a20, a board comprising the integrated circuit device of clause a 19.
Clause a21, a method of serializing nested data comprising at least a first level structure and a second level structure, the method comprising: responding to a hot migration starting request, and generating information to be migrated, wherein the step of generating the information to be migrated comprises the following steps: generating a first symbol identifier in the data structure layer of the information to be migrated, wherein the first symbol identifier is used for recording the name of the first layer structure body; and generating a second symbol identifier in the serialization layer of the information to be migrated so as to record the name of the second layer structure.
Clause a22, a method of deserializing nested data, the nested data comprising at least a first level structure and a second level structure, the method comprising: receiving information to be migrated, wherein a data structure of the information to be migrated comprises: a data structure layer including a first symbol identifier; and a serialization layer comprising a second symbol identifier; retrieving first serialized data based on the first symbol identifier; retrieving second serialized data based on the second symbol identifier; reducing the first serialized data to the first layer structure; reducing the second serialized data to the second layer structure; and storing the first layer structure and the second layer structure.
Clause a23, a computer-readable storage medium having stored thereon computer program code for processing nested data, the computer program code, when executed by a processing apparatus, performing the method of clause a21 or 22.

Claims (23)

1. A system for serializing nested data, the nested data comprising at least a first tier structure and a second tier structure, the system comprising:
the memory is used for storing the nested data; and
the serialization device is used for responding to a hot migration starting request to generate information to be migrated, and the data structure of the information to be migrated comprises:
a data structure layer including a first symbolic identifier for recording a name of the first layer structure; and
a serialization layer including a second symbolic identifier to document a name of the second layer structure.
2. The system of claim 1, wherein the first hierarchy includes at least one first type, the second hierarchy includes at least one second type, the serialization mechanism generates a first type identifier at the data structure level to document the first type, and generates a second type identifier at the serialization level to document the second type.
3. The system of claim 1, wherein the serialization mechanism generates a first key identifier in the data structure layer to document variable names in the first layer structure and generates a second key identifier in the serialization layer to document variable names in the second layer structure.
4. The system of claim 1, wherein the serialization mechanism generates an entity identifier in the data structure layer to document the serialization layer information.
5. The system of claim 1, wherein the serializing means generates a magic number identifier in the serialization layer to indicate the beginning of the serialization layer.
6. The system of claim 1, wherein the serialization mechanism generates a length identifier in the serialization layer to represent a length of the serialization layer.
7. The system of claim 1, wherein the serializing means generates a count identifier in the serialization layer to indicate a variable number.
8. The system of claim 1, wherein the serializing means generates a format identifier in the serialization layer to indicate a variable format.
9. The system of claim 1, wherein the serializing means generates a numerical identifier in the serialization layer to document a variable value.
10. A system for deserializing nested data, the nested data comprising at least a first tier structure and a second tier structure, the system comprising:
a deserializer to:
receiving information to be migrated, wherein a data structure of the information to be migrated comprises:
a data structure layer including a first symbol identifier; and
a serialization layer comprising a second symbol identifier;
retrieving first serialized data based on the first symbol identifier;
retrieving second serialized data based on the second symbol identifier;
reducing the first serialized data to the first layer structure; and
reducing the second serialized data to the second layer structure; and
and the memory is used for storing the first layer of structural body and the second layer of structural body.
11. The system of claim 10, wherein the first layer structure comprises at least one first type, the second layer structure comprises at least one second type, the data structure layer comprises a first type identifier, the serialization layer comprises a second type identifier, the deserializing means is to:
identifying the first type from the first type identifier; and
identifying the second type from the second type identifier.
12. The system of claim 10, wherein the data structure layer comprises a first key identifier, the serialization layer comprises a second key identifier, the deserializing means is to:
restoring variable names in the first layer of structure from the first key identifier; and
restoring variable names in the second tier structure from the second key identifier.
13. The system of claim 10, wherein the data structure layer includes an entity identifier, the deserializing means to:
retrieving the serialization layer information from the entity identifier.
14. The system of claim 10, wherein the serialization layer includes a magic number identifier, the deserializing means to:
identifying a start of the second tier structure from the magic number identifier.
15. The system of claim 10, wherein the serialization layer includes a length identifier, the deserializing means to:
identifying a length of the serialization layer from the length identifier.
16. The system of claim 10, wherein the serialization layer includes a count identifier, the deserializing means to:
a variable number is identified from the count identifier.
17. The system of claim 10, wherein the serialization layer includes a format identifier, the deserializing means to:
a variable format is identified from the format identifier.
18. The system of claim 10, wherein the serialization layer includes a numeric identifier, the deserializing means to:
a variable value is taken from the value identifier.
19. An integrated circuit device comprising the system of any one of claims 1-18.
20. A board card comprising the integrated circuit device of claim 19.
21. A method of serializing nested data, the nested data comprising at least a first tier structure and a second tier structure, the method comprising:
responding to a hot migration starting request, and generating information to be migrated, wherein the step of generating the information to be migrated comprises the following steps:
generating a first symbol identifier in the data structure layer of the information to be migrated, wherein the first symbol identifier is used for recording the name of the first layer structure body; and
and generating a second symbol identifier in the serialization layer of the information to be migrated so as to record the name of the second layer structure.
22. A method of deserializing nested data, the nested data comprising at least a first tier of structures and a second tier of structures, the method comprising:
receiving information to be migrated, wherein a data structure of the information to be migrated comprises:
a data structure layer including a first symbol identifier; and
a serialization layer comprising a second symbol identifier;
retrieving first serialized data based on the first symbol identifier;
retrieving second serialized data based on the second symbol identifier;
reducing the first serialized data to the first layer structure;
reducing the second serialized data to the second layer structure; and
and storing the first layer structure and the second layer structure.
23. A computer readable storage medium having stored thereon computer program code for processing nested data, which when executed by a processing apparatus performs the method of claim 21 or 22.
CN202011043849.9A 2020-09-28 2020-09-28 Method, device and storage medium for realizing serialization and deserialization nested data Pending CN114328366A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202011043849.9A CN114328366A (en) 2020-09-28 2020-09-28 Method, device and storage medium for realizing serialization and deserialization nested data
PCT/CN2021/102073 WO2022062510A1 (en) 2020-09-28 2021-06-24 Device and method for implementing live migration
US18/003,689 US20230244380A1 (en) 2020-09-28 2021-06-24 Device and method for implementing live migration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011043849.9A CN114328366A (en) 2020-09-28 2020-09-28 Method, device and storage medium for realizing serialization and deserialization nested data

Publications (1)

Publication Number Publication Date
CN114328366A true CN114328366A (en) 2022-04-12

Family

ID=81011757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011043849.9A Pending CN114328366A (en) 2020-09-28 2020-09-28 Method, device and storage medium for realizing serialization and deserialization nested data

Country Status (1)

Country Link
CN (1) CN114328366A (en)

Similar Documents

Publication Publication Date Title
US9760291B2 (en) Secure migratable architecture having high availability
CN101901207B (en) Operating system of heterogeneous shared storage multiprocessor system and working method thereof
CN112286645B (en) GPU resource pool scheduling system and method
CN103034524B (en) Half virtualized virtual GPU
WO2017024783A1 (en) Virtualization method, apparatus and system
US4814975A (en) Virtual machine system and method for controlling machines of different architectures
CN115113973A (en) Configurable device interface
US20180074843A1 (en) System, method, and computer program product for linking devices for coordinated operation
JP2015503784A (en) Migration between virtual machines in the graphics processor
CN111309649B (en) Data transmission and task processing method, device and equipment
US20180217859A1 (en) Technologies for duplicating virtual machine states
WO2022001808A1 (en) System and interrupt processing method
CN113806006A (en) Method and device for processing exception or interrupt under heterogeneous instruction set architecture
CN114281467A (en) System method, device and storage medium for realizing heat migration
WO2021223744A1 (en) Method for realizing live migration, chip, board, and storage medium
CN113326226A (en) Virtualization method and device, board card and computer readable storage medium
CN112433823A (en) Apparatus and method for dynamically virtualizing physical card
CN111857943B (en) Data processing method, device and equipment
WO2022062510A1 (en) Device and method for implementing live migration
CN113568734A (en) Virtualization method and system based on multi-core processor, multi-core processor and electronic equipment
CN114328366A (en) Method, device and storage medium for realizing serialization and deserialization nested data
CN114281468A (en) Apparatus, associated method and readable storage medium for implementing thermomigration
CN114281749A (en) Apparatus, method and storage medium for implementing serialization and deserialization tree data
CN114281750A (en) Method, apparatus and storage medium for implementing serialized and deserialized logical pointers
US20230111884A1 (en) Virtualization method, device, board card and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination