CN110958297A - Data migration method and system

Info

Publication number: CN110958297A (granted as CN110958297B)
Application number: CN201911038210.9A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 张佳玮, 纪越峰, 冯佳新, 柏琳
Applicant/Assignee: Beijing University of Posts and Telecommunications
Priority/Filing date: 2019-10-29
Publication date: 2020-04-03
Legal status: Granted; Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a data migration method and system, wherein the data migration method comprises the following steps: connecting a hardware node with a controller to control a switching node in the hardware node; synchronizing, packing, copying and transmitting the file data among the processing pools to realize mirror image data synchronization; and carrying out data migration on the virtualized container.

Description

Data migration method and system
Technical Field
The present invention relates to the field of networks, and in particular, to a data migration method and system.
Background
In view of the requirements of the 5G system on the access network architecture, 3GPP proposes a CU (central unit)/DU (distributed unit) high-level logical split architecture as the 5G RAN infrastructure. RRC/PDCP (radio resource control/packet data convergence protocol) is placed in the CU, High-RLC/Low-RLC/High-MAC/Low-MAC/High-PHY in the DU, and Low-PHY/RF in the AAU. In addition, the cloudification of the radio access network in 5G brings flexible resource orchestration to the radio access network: the processing functions of the RAN are centralized, and Virtual Machines (VMs) or containers deployed in a processing pool replace the traditional dedicated signal-processing hardware platform. Deploying the RAN functions (DU entities/CU entities) in VMs or containers as Virtual Network Functions (VNFs) improves hardware resource utilization, enhances system scalability, allows operators to realize seamless transitions between different communication systems simply by upgrading the software, and reduces operating costs.
Fig. 1 shows a simplified C-RAN architecture in which RAN functions are deployed in containers: the processing functions for the data received by RRH1-RRH4 are deployed in processing pools and handled by containers 1-4, respectively. A co-deployed DU/CU framework is considered here.
As shown in fig. 1, when the load in processing pool 1 increases, a load balancing policy is required to guarantee service quality, i.e., the lightly loaded processing pool 2 is allowed to take over processing tasks from processing pool 1. Current research on load balancing strategies falls into two main categories: processing task migration and processing function migration. Taking containerized DU/CU deployment as an example, as shown in fig. 2:
1. Service migration
Fig. 2 (a) shows a typical processing task migration: the link of RRH2, which is originally connected to processing pool 1, is switched at the switching node so that RRH2 connects to processing pool 2, and a new container 2 serving RRH2 is created in processing pool 2. After the link switching and container creation are completed, the original container 2 in processing pool 1 is stopped and deleted, so that processing resources are shared.
2. Virtual container migration
Fig. 2 (b) shows a typical processing function migration: container 2 in processing pool 1 is copied into processing pool 2; when the switching condition defined by the switching policy is satisfied (e.g., when the amount of memory data changing in real time falls below a certain threshold), the link switching is performed at the switching node and RRH2 is connected to the new processing pool; after the migration is completed, the container no longer used in processing pool 1 is stopped and deleted.
The main goal of load balancing in C-RAN is either to offload part of the processing tasks of a highly loaded processing pool to an efficiently utilized low-load processing pool, or to aggregate the processing tasks of several low-load processing pools into a small number of pools and shut down the idle pools to reduce energy consumption. Two key indicators for evaluating a load balancing strategy are the migration time and the service suspension time.
For load balancing implemented by the processing task migration of fig. 2 (a), the technical difficulty lies in allowing RRHs to dynamically connect to different processing pools and in orchestrating and controlling the creation and deletion of containers. The disadvantage is that the container newly built in processing pool 2 to process the traffic of RRH2 does not contain the data held by the original container 2, so extra registration time is required to map RRH2 to the new container, which causes a long service suspension time.
Load balancing by migrating containers as in fig. 2 (b) greatly reduces the service suspension time compared with the strategy of fig. 2 (a), because copying container 2 from processing pool 1 to processing pool 2 in its entirety preserves the container's original state. However, full container copy has a drawback: when the container itself occupies a large amount of disk storage, copying it between processing pools located in different geographical places occupies a large amount of link resources between the two pools within a short period. If the link is already highly loaded, the copy lasts a long time and rapid migration cannot be achieved.
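To make the link-pressure concern concrete, the short calculation below uses purely illustrative numbers (container size, link capacity and link load are assumptions, not figures from this disclosure) to estimate how long a full container copy would occupy an inter-pool link.

```python
# Illustrative estimate only: all sizes and link parameters are assumed values,
# not figures given in this disclosure.
image_gb = 2.0            # container image (mirror) layers
container_layer_gb = 0.1  # writable container layer
memory_gb = 0.5           # packed runtime memory data
total_gb = image_gb + container_layer_gb + memory_gb

link_gbps = 10.0          # inter-pool link capacity
link_load = 0.8           # fraction of the link already occupied by other traffic

copy_s_idle = total_gb * 8 / link_gbps
copy_s_busy = total_gb * 8 / (link_gbps * (1 - link_load))
print(f"full copy on an idle link: ~{copy_s_idle:.1f} s, "
      f"on the loaded link: ~{copy_s_busy:.1f} s")
```

Under these assumed numbers the same copy takes roughly five times longer once the link is 80% occupied, which is the effect the paragraph above describes.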
In order to solve the above-mentioned drawbacks of the prior art, it is necessary to provide a new data migration method and system to replace the existing migration method.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a data migration method, wherein the data migration method includes:
connecting a hardware node with a controller to control a switching node in the hardware node;
synchronizing, packing, copying and transmitting the file data among the processing pools to realize mirror image data synchronization;
and carrying out data migration on the virtualized container.
The data migration method as described above, wherein, in connecting the hardware node with the controller, a switching node in the network is connected with the controller to exchange information in the network.
In the data migration method described above, when a query shows that the network resource link occupancy is lower than the preset threshold, the file data between the processing pools is synchronized, packed and copied.
According to the data migration method, the source and destination processing pools for the migration are calculated according to the processing pool resource occupancy in the network, the link load condition and the service delay requirement, and the migration of the DU/CU is performed according to the calculation result.
A second aspect of the present invention provides a data migration system, wherein the data migration system includes: a hardware node and a controller, wherein the hardware node is connected with the controller,
the hardware nodes comprise a switching node and a processing pool node;
the switching node is connected with a light path management module of the controller, which is used for collecting information of the switching node, so as to control the switching node;
the processing pool node is connected with a processing resource monitoring module of the controller for collecting the file system information and the load monitoring information reported by the processing pool node, and the processing pool node is also connected with a data migration and VNF management module of the controller for copying the file data and managing VNFs.
The data migration system as described above, wherein the controller further includes a parameter information database connected to the optical path management module and the processing resource monitoring module.
The data migration system as described above, wherein the controller further includes a policy calculation module that, based on the link information in the parameter information database, implements mirrored data synchronization through the data migration and VNF management module.
The data migration system as described above, wherein the policy calculation module calculates a source-destination processing pool for implementing migration according to a processing pool resource occupation condition, a link load condition, and a service delay requirement in a network, and controls the data migration and VNF management module to implement migration of the DU/CU according to a calculation result.
A third aspect of the present invention provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the data migration method as described above when executing the computer program.
A fourth aspect of the present invention proposes a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the data migration method as described above.
The data migration method and the migration system can effectively reduce the migration time of the virtualization container (e.g., DU/CU) and realize rapid load balancing.
Drawings
Fig. 1 is a schematic diagram of a prior art clouded RAN architecture;
fig. 2 is a schematic diagram of a RAN load balancing manner in the prior art, in which (a) in fig. 2 represents processing task migration, and (b) in fig. 2 represents processing function migration.
FIG. 3 is a schematic diagram of a container file structure of the present invention;
FIG. 4 is a schematic structural diagram of a data migration system according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating the operation of a data migration system according to an embodiment of the present invention;
FIG. 6 is a flow diagram of data migration system initialization provided by an embodiment of the present invention;
FIG. 7 is a flow diagram of processing inter-pool mirror synchronization according to an embodiment of the present invention;
FIG. 8 is a flow chart of data copying in mirror synchronization according to an embodiment of the present invention;
FIG. 9 is a flow diagram of a mirrored data copy provided by an embodiment of the present invention;
FIG. 10 is a flowchart of DU/CU migration according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative only and should not be construed as limiting the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It should be noted that all expressions using "first" and "second" in the embodiments of the present invention are used for distinguishing two entities with the same name but different names or different parameters, and it should be noted that "first" and "second" are merely for convenience of description and should not be construed as limitations of the embodiments of the present invention, and they are not described in any more detail in the following embodiments.
The technical solution of the embodiments of the present invention is described in detail below with reference to the accompanying drawings.
The invention provides a data migration method, wherein the data migration method comprises the following steps:
connecting the hardware node with a controller to control a switching node in the hardware node;
synchronizing, packing, copying and transmitting the file data among the processing pools to realize mirror image data synchronization;
and carrying out data migration on the virtualized container.
The invention also provides a data migration system, wherein the data migration system comprises: a hardware node and a controller, wherein the hardware node is connected with the controller,
the hardware nodes comprise a switching node and a processing pool node;
the switching node is connected with a light path management module of the controller, which is used for collecting information of the switching node, so as to control the switching node;
the processing pool node is connected with a processing resource monitoring module of the controller for collecting the file system information and the load monitoring information reported by the processing pool node, and the processing pool node is also connected with a data migration and VNF management module of the controller for copying the file data and managing VNFs.
A specific embodiment of the data migration system of the present invention will now be described in detail with reference to fig. 3-10, and a process of an embodiment of the data migration method will be described in detail with reference to the embodiment of the data migration system, so as to make the present invention clear.
Specifically, as shown in fig. 3, when DU (distributed unit)/CU (central unit) migration is implemented by copying the whole container, the migration speed is limited by the link bandwidth because the container itself carries a large amount of data. The copied container data contains two main parts: the container file system and the runtime memory data, and the vast majority of it is the container file system. The container file system uses a layered storage mechanism: the bottom layers are the mirror image layers of the container and the top layer is the container layer. Container mirror layers created from the same image have exactly the same data content, while the container layers differ. Therefore, if processing pool 1 and processing pool 2 share a file system directory or use a mirror resource pre-synchronization mechanism, the mirror data between the processing pools can be synchronized while the link is idle, and only the container layer data and the memory data need to be copied when a container migration is required. This greatly reduces the copy time and relieves the instantaneous link bandwidth pressure between the processing pools.
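As a minimal sketch of this idea (not code from this disclosure; the snapshot structure, field names and digests are assumptions for illustration only), the following shows how the data that must cross the inter-pool link shrinks to the container layer plus memory state once the mirror layers are already present at the destination:

```python
# A minimal sketch, assuming layer-addressed container images; the structure
# and names below are illustrative, not the disclosure's implementation.
from dataclasses import dataclass
from typing import Dict, Set


@dataclass
class ContainerSnapshot:
    image_layers: Dict[str, int]   # read-only mirror layers: digest -> size in bytes
    container_layer_bytes: int     # writable container layer
    memory_bytes: int              # packed runtime memory data


def migration_payload(snap: ContainerSnapshot, layers_at_destination: Set[str]) -> int:
    """Bytes that still have to cross the inter-pool link during migration."""
    missing = sum(size for digest, size in snap.image_layers.items()
                  if digest not in layers_at_destination)
    # If the mirror layers were pre-synchronized while the link was idle,
    # 'missing' is zero and only the container layer and memory data remain.
    return missing + snap.container_layer_bytes + snap.memory_bytes
```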
As shown in fig. 4, based on the above idea, a containerized DU/CU migration system based on sharing the container mirror file system among multiple processing pools is proposed to address the service suspension time and the instantaneous high link bandwidth occupation of the load balancing policies in the existing C-RAN (Cloud Radio Access Network) architecture. The system monitors the occupancy of the processing resources in the resource pools and defines the migration conditions and the migration strategy, targeting the situation in next-generation mobile networks where a rapid increase of users in a given area overloads a processing pool and causes services to be rejected. Compared with the traditional task migration mechanism, the migration system effectively reduces the service stop time and greatly relieves the instantaneous link load caused by container copying. The invention is detailed as follows:
1. Adaptive migration system architecture
The migration system in the present invention is divided into hardware nodes and a migration system controller, as shown in fig. 4:
(1) Hardware node
The hardware nodes mainly comprise switching nodes and processing pool nodes. The switching nodes can include optical switching nodes, electrical switching nodes and the like; they separate the control plane from the data plane in the SDN (Software Defined Network) manner and are controlled uniformly at the control layer through protocols such as OpenFlow and NETCONF. The processing pool nodes are managed through a VIM in the NFV (Network Function Virtualization) manner, realizing abstraction, orchestration and management of the processing pool resources. In each processing pool node, a data copy agent is deployed for synchronizing file data between processing pools and for packing, copying and transmitting memory data; a file management system monitors the state of the files in the processing pool node (including container mirror layer files, container layer files, configuration files and the like), and these states are read and processed by the control layer to realize mirror file synchronization between processing pools; a load monitoring module monitors the occupancy of resources in the processing pool in real time and reports the information to the controller in real time, enabling the controller to decide when to execute a load balancing strategy among the resource pools.
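For illustration only, the sketch below shows one possible shape of the status report that the file management system and load monitoring module could send to the controller; the field names and the threshold are assumptions, not elements of this disclosure.

```python
# A hedged sketch of a per-pool status report; names and threshold are assumed.
from dataclasses import dataclass
from typing import List


@dataclass
class PoolStatusReport:
    pool_id: str
    cpu_utilization: float            # 0.0-1.0, from the load monitoring module
    memory_utilization: float         # 0.0-1.0
    image_layer_files: List[str]      # container mirror layer files (file management system)
    container_layer_files: List[str]  # per-container writable layer files
    config_files: List[str]
    running_containers: List[str]     # DU/CU containers currently hosted


def is_overloaded(report: PoolStatusReport, cpu_threshold: float = 0.85) -> bool:
    # The node only reports its state; the controller decides whether to
    # trigger load balancing between the resource pools.
    return report.cpu_utilization > cpu_threshold
```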
(2) DU/CU migration system controller
The migration system controller comprises a policy calculation module, a parameter information database, a light path management module, a data migration and VNF management module, and a processing pool resource monitoring module.
Parameter information database: for recording node information (switching nodes, processing pool nodes, etc.) in the network and the network topology;
Light path management module: responsible for collecting the information of the switching nodes, storing it in the parameter information database, and controlling the switching nodes;
Processing resource monitoring module: collecting the file system information and load monitoring information reported by the processing pool nodes and updating the corresponding contents in the parameter database in real time;
data migration and VNF (virtual network architecture) management module: the method is mainly responsible for copying of file data (data synchronization between idle processing pools and data copying during migration) and VNF management;
Policy calculation module: the core of the migration system controller, mainly used for policy calculation and for orchestrating the whole system (an illustrative sketch is given below).
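As an illustration of what such a policy calculation could look like (the thresholds, scoring rule and function signature below are assumptions, not the concrete algorithm of this disclosure), a source-destination pool pair can be chosen from the pool loads, link states and the service delay requirement:

```python
# Illustrative policy sketch; thresholds and the preference rule are assumptions.
from typing import Dict, Optional, Tuple


def select_migration_pair(
    pool_load: Dict[str, float],                        # pool_id -> CPU load (0.0-1.0)
    links: Dict[Tuple[str, str], Tuple[float, float]],  # (src, dst) -> (free Gbps, delay ms)
    delay_requirement_ms: float,
    high: float = 0.85,
    low: float = 0.5,
) -> Optional[Tuple[str, str]]:
    """Pick an overloaded source pool and a destination meeting load and delay limits."""
    overloaded = sorted((p for p, load in pool_load.items() if load > high),
                        key=pool_load.get, reverse=True)
    for src in overloaded:
        candidates = [
            (dst, links[(src, dst)][0])
            for dst, load in pool_load.items()
            if dst != src and load < low
            and (src, dst) in links
            and links[(src, dst)][1] <= delay_requirement_ms
        ]
        if candidates:
            # Prefer the destination reachable over the link with the most free bandwidth.
            return src, max(candidates, key=lambda c: c[1])[0]
    return None
```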
2. Migration System detailed description
The migration system is divided into a system initialization phase, an operation phase and an end phase, as shown in fig. 5. The operation phase is the main body of this patent's design and mainly includes two parts: mirror synchronization between processing pools and DU/CU migration.
(1) System initialization
The system initialization signaling flow is shown in fig. 6. During system initialization, all hardware nodes in the network, i.e., the switching nodes and the processing pool nodes, synchronize their information with the controller. A switching node establishes a connection with the light path management module through a given port; in a processing pool node, the internal load monitoring module establishes a connection with the processing pool resource monitoring module in the controller to exchange information. The interaction between a hardware node (switching node or processing pool node) and the controller during initialization takes three steps. (1) Access request: the hardware node establishes a connection with the controller (the connection is established over TCP using protocols such as OpenFlow or NETCONF, and the protocol version is negotiated after the connection is successfully established). (2) The controller requests node information from the hardware node: the controller sends an information request message asking the hardware node to upload its detailed parameters. (3) The hardware node replies with its state information: for example, a switching node uploads its flow table and the number of supported buffers, while a processing pool node uploads its CPU, memory, file system and network parameters and the container information of the current processing pool. Finally, the corresponding module in the controller stores the information in the parameter information database, and the initialization process ends.
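The following self-contained sketch mirrors the three-step exchange above; the class names, fields and example values are illustrative assumptions, since the disclosure only specifies a TCP-based connection with OpenFlow/NETCONF-style negotiation followed by an information request and reply.

```python
# A minimal, runnable sketch of the three-step initialization exchange.
# All names and example values are assumptions, not part of this disclosure.
class Controller:
    max_version = 4  # highest protocol version this controller speaks

    def __init__(self):
        self.parameter_db = {}  # parameter information database

    def initialize(self, node):
        # (1) Access request: connection established, protocol version negotiated.
        version = max(v for v in node.supported_versions if v <= self.max_version)
        # (2) Controller requests the node's detailed parameters.
        info = node.report_info()
        # (3) The node's reply is stored in the parameter information database.
        self.parameter_db[node.node_id] = {"version": version, **info}


class ProcessingPoolNode:
    supported_versions = (1, 3, 4)

    def __init__(self, node_id):
        self.node_id = node_id

    def report_info(self):
        return {"cpu_cores": 32, "memory_gb": 128,
                "image_layers": ["sha256:aa", "sha256:bb"],
                "containers": ["du-1", "cu-1"]}


controller = Controller()
controller.initialize(ProcessingPoolNode("pool-1"))
print(controller.parameter_db)
```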
(2) Mirror synchronization between processing pools
The synchronization flow is shown in fig. 7:
for the switching node in the network, the optical path information, including flow information, port information, etc., is reported to the management module in the controller periodically, so as to ensure that the latest state in the network is stored in the controller. For processing pool nodes in the network, (1) the processing pool monitors file system information in the processing pool through a node internal load monitoring module, reports the file system information to a processing pool resource monitoring module in the controller and updates the file system information to a parameter information database. (2) In the process of processing the parameter information base updated by the pool resource monitoring module, comparison and update are carried out. If the current information is found to be out of synchronization with the data in the parameter information database, step (3) is performed. (3) And (4) when the information of the processing pool file system in the database is changed, sending the information change message to the strategy calculation module, and executing the step (4). (4) The strategy calculation module inquires the link information in the database, judges whether the available network bandwidth between the changed processing pool and other processing pools in the current network is higher than a specified threshold value, if so, executes the step (5), otherwise, tries to execute the step (4) again after sleeping for a certain time. (5) If there are available network resources in the network, the policy computation module will notify the data migration and VNF management module of the processing pool that needs to perform resource synchronization, and the data migration and VNF management module controls the processing pool to finish the mirror data copy operation between the processing pools, as shown in fig. 8 in detail, if the copy is successful, the operation of step (6) is performed, and if the copy fails, the operation returns to step (4). (6) After the data copy update between the processing pools is completed, the processing pools can synchronize information to the controller by continuing to periodically. Note that the system initializes a value of the number of times of copy synchronization, stops the operation if the number of times of execution failure of the above flow exceeds a given value, and does not perform the operation for a certain period of time.
Fig. 8 shows the implementation of mirror data copy and synchronization. The synchronization and copy of mirror data between processing pools involves the data migration and VNF management module in the controller and the data copy agents in the processing pools, and comprises three steps. (1) State query: the data migration and VNF management module in the controller establishes a connection with the data copy agent in each processing pool whose mirror information is to be synchronized and queries the available ports in the current processing pool; the data copy agent in each processing pool queries its available ports and reports them to the data migration and VNF management module. This corresponds to processes a-d in fig. 9. (2) Port allocation and configuration of the data to be transmitted: the data migration and VNF management module in the controller determines the specific port for the data copy according to the port information reported by the data copy agents, and this port information is sent to the data copy agents in the corresponding processing pools together with the information on the mirror files to be synchronized. After the configuration succeeds, the mirror data will be copied through this port, corresponding to processes e-h in fig. 9. (3) The data copy agents of the processing pools establish a TCP connection through the ports allocated by the controller, copy the corresponding mirror information, and notify the data migration and VNF management module in the controller after the copy succeeds. This corresponds to processes i-k in fig. 9.
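A condensed sketch of the copy step is shown below: the destination agent listens on the controller-assigned port and the source agent streams the configured mirror file over a direct TCP connection. Socket handling is simplified and all names are assumptions; notification of the controller is represented only by a comment.

```python
# Simplified data copy agent behaviour (cf. steps a-k of fig. 9); illustrative only.
import socket


def query_free_port() -> int:
    """State query: let the OS pick an available port to report to the controller."""
    with socket.socket() as s:
        s.bind(("", 0))
        return s.getsockname()[1]


def receive_mirror(listen_port: int, out_path: str) -> None:
    """Destination agent: accept one connection on the assigned port and store the data."""
    with socket.socket() as srv:
        srv.bind(("", listen_port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, open(out_path, "wb") as f:
            while chunk := conn.recv(65536):
                f.write(chunk)


def send_mirror(dest_host: str, dest_port: int, mirror_path: str) -> None:
    """Source agent: stream the configured mirror file to the destination agent."""
    with socket.socket() as s, open(mirror_path, "rb") as f:
        s.connect((dest_host, dest_port))
        while chunk := f.read(65536):
            s.sendall(chunk)
    # On success the agent would notify the controller's data migration and
    # VNF management module (not shown here).
```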
The invention is oriented to a CU-DU-RRH three-level architecture in a Radio Access Network (RAN) environment, and solves the problem of unbalanced processing pool resource load caused by the space and time dynamics of network flow. In a clouded RAN, DU/CU processing units are typically deployed in a processing pool by way of Virtual Machines (VMs) or containers to provide the data processing functions of the RAN. The processing pool provides the infrastructure for the deployment of VMs/containers to build a virtualized environment, and the Virtual Infrastructure Manager (VIM) manages the allocation of resources in the processing pool, allocating enough computing resources (i.e., physical CPU cores/threads) for the VMs/containers where the DUs/CUs are located to enable them to complete data processing.
However, as the user traffic load increases, the resource requirements of each VM/container grow, and if a processing pool cannot provide the resources a VM/container needs, the performance of the RAN functions degrades. To meet the service requirements, resources must therefore be scheduled across multiple processing pools, and the network load balancing problem must be solved by jointly considering the load of the processing pools and the users' QoS (Quality of Service) requirements. Conventional load balancing schemes fall broadly into two categories. The first is migrating service traffic between processing pools, which changes the physical connection between the RRH and the processing pools; this requires the RRH to re-register and may cause a long service interruption. The second is virtual container migration, in which the baseband processing function is deployed in a virtual container (VM/Docker) and load balancing is achieved by migrating the VM/container between processing pools; this causes only a short service interruption, but all resources and data in the VM/container (image files, memory data, etc.) must be copied between the processing pools, which wastes considerable bandwidth, and when the image file is too large the copy time, and hence the migration time, becomes very long. Both load balancing strategies therefore suffer from either an excessively long service suspension time or an excessively long migration time, resulting in low load balancing efficiency.
For the RAN, this patent proposes a containerized DU/CU migration system and method based on mirror resource pre-synchronization. The migration system deploys the DU/CU containers and makes DU/CU migration decisions by sensing the load of the processing pools in the RAN, so as to achieve load balance among them. Unlike conventional migration methods, the proposed method pre-synchronizes the container mirror layer resources deployed between the RAN processing pools, which effectively reduces the migration time of the virtualized container (e.g., DU/CU) and realizes rapid load balancing.
In the invention, a migration architecture and scheme for virtualized containers is designed under the CU-DU-RRH three-level RAN architecture, addressing the service performance degradation caused by processing-capacity preemption in high-load areas due to the spatial and temporal dynamics of the mobile network. Compared with prior solutions, the invention introduces container mirror layer resource synchronization: by synchronizing data when the links are idle, the container mirror resources are shared among multiple processing pools, which relieves the network load increase caused by copying large amounts of mirror data under burst migration requests and greatly reduces the migration time. Meanwhile, the patent provides an overall control scheme for container migration in which detection, management and migration of containerized DU/CUs are realized dynamically by combining SDN and NFV technologies.
An embodiment of the invention provides terminal equipment. The terminal device of this embodiment includes: a processor, a memory, and a computer program, such as a data migration program, stored in the memory and executable on the processor. The processor, when executing the computer program, implements the steps in the embodiments of the data migration method described above. Alternatively, the processor, when executing the computer program, implements the functions of each module/unit in the embodiments of the data migration systems, for example, the functions of each module of the systems.
Illustratively, a computer program may be partitioned into one or more modules/units, which are stored in a memory and executed by a processor to implement the present invention. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of a computer program in a terminal device.
The terminal device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that this is merely an example of a terminal device and does not constitute a limitation; more or fewer components may be included, certain components may be combined, or different components may be used; for example, the terminal device may also include input/output devices, network access devices, buses, etc.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash memory card (Flash Card) provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used for storing the computer program as well as other programs and data required by the terminal device. The memory may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
Those of ordinary skill in the art will understand that: the discussion of any embodiment above is meant to be exemplary only, and is not intended to intimate that the scope of the disclosure, including the claims, is limited to these examples; within the idea of the invention, also features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity.
In addition, well known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures for simplicity of illustration and discussion, and so as not to obscure the invention. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the invention, and also in view of the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform within which the present invention is to be implemented (i.e., specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the invention, it should be apparent to one skilled in the art that the invention can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative instead of restrictive.
While the present invention has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the discussed embodiments.
Those skilled in the art will appreciate that the present invention includes apparatus directed to performing one or more of the operations described in the present application. These devices may be specially designed and manufactured for the required purposes, or they may comprise known devices in general-purpose computers. These devices have stored therein computer programs that are selectively activated or reconfigured. Such a computer program may be stored in a device (e.g., computer) readable medium, including, but not limited to, any type of disk including floppy disks, hard disks, optical disks, CD-ROMs, and magnetic-optical disks, ROMs (Read-Only memories), RAMs (Random Access memories), EPROMs (Erasable programmable Read-Only memories), EEPROMs (Electrically Erasable programmable Read-Only memories), flash memories, magnetic cards, or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a bus. That is, a readable medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer). It will be understood by those within the art that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. Those skilled in the art will appreciate that the computer program instructions may be implemented by a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the features specified in the block or blocks of the block diagrams and/or flowchart illustrations of the present disclosure.
Those of skill in the art will appreciate that the various operations, methods, and steps in the processes, acts, or solutions discussed in the present application may be interchanged, modified, rearranged, decomposed, combined, or deleted, as may the steps, measures, and schemes in the various operations, methods, and procedures disclosed in the prior art and in the present invention. Therefore, any omissions, modifications, substitutions, improvements and the like that can be made without departing from the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. A data migration method, characterized in that the data migration method comprises:
connecting a hardware node with a controller to control a switching node in the hardware node;
synchronizing, packing, copying and transmitting the file data among the processing pools to realize mirror image data synchronization;
and carrying out data migration on the virtualized container.
2. The data migration method of claim 1, wherein in interfacing the hardware node with a controller, a switching node in a network is interfaced with the controller to exchange information in the network.
3. The data migration method of claim 2, wherein the file data between the processing pools is synchronized, packed with memory data, and transmitted in a copy manner in response to a query that the network resource link occupancy is lower than a preset threshold.
4. The data migration method according to claim 3, wherein a source-destination processing pool for implementing migration is calculated according to a processing pool resource occupation situation, a link load situation, and a service delay requirement in the network, and the migration of the DU/CU is implemented according to the calculation result.
5. A data migration system, characterized in that the data migration system comprises: a hardware node and a controller, wherein the hardware node is connected with the controller,
the hardware nodes comprise a switching node and a processing pool node;
the switching node is connected with a light path management module of the controller, which is used for collecting information of the switching node, so as to control the switching node;
the processing pool node is connected with a processing resource monitoring module of the controller for collecting the file system information and the load monitoring information reported by the processing pool node, and the processing pool node is also connected with a data migration and VNF management module of the controller for copying the file data and managing VNFs.
6. The data migration system of claim 5, wherein said controller further comprises a parameter information database interfaced with said light path management module and said processing resource monitoring module.
7. The data migration system of claim 6, wherein the controller further comprises a policy computation module that, based on the link information in the parameter information database, synchronizes mirrored data via the data migration and VNF management module.
8. The data migration system according to claim 7, wherein the policy calculation module calculates a source-destination processing pool for implementing migration according to a processing pool resource occupation condition, a link load condition, and a service delay requirement in a network, and controls the data migration and VNF management module to implement migration of the DU/CU according to a calculation result.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the data migration method according to any one of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the data migration method according to any one of claims 1 to 4.
Priority Applications (1)

Application Number: CN201911038210.9A
Priority Date: 2019-10-29
Filing Date: 2019-10-29
Title: Data migration method and system
Status: Active; granted as CN110958297B

Publications (2)

CN110958297A, published 2020-04-03
CN110958297B (granted), published 2021-10-01

Family

Family ID: 69976484

Family Applications (1)

Application Number: CN201911038210.9A
Title: Data migration method and system
Status: Active
Granted Publication: CN110958297B

Country Status (1)

Country: CN; Publication: CN110958297B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090260047A1 (en) * 2008-04-15 2009-10-15 Buckler Gerhard N Blade center kvm distribution
CN104040485A (en) * 2012-01-09 2014-09-10 微软公司 PAAS hierarchial scheduling and auto-scaling
CN104579732A (en) * 2013-10-21 2015-04-29 华为技术有限公司 Method, device and system for managing virtualized network function network elements
CN106936882A (en) * 2015-12-31 2017-07-07 深圳先进技术研究院 A kind of electronic article transaction system
CN106657330A (en) * 2016-12-22 2017-05-10 北京华为数字技术有限公司 User data migration method and user data backup method, device and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郭帅: "网络功能虚拟化平台研究" [Research on Network Function Virtualization Platforms], 《中国硕士学位论文全文数据库(电子期刊)》 [China Master's Theses Full-text Database (Electronic Journal)] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112445573A (en) * 2020-11-04 2021-03-05 许继集团有限公司 Edge Internet of things agent resource scheduling method and device based on standby mechanism
CN113672354A (en) * 2021-08-25 2021-11-19 广东浪潮智慧计算技术有限公司 Virtual machine migration method and related device
CN113672354B (en) * 2021-08-25 2024-01-23 广东浪潮智慧计算技术有限公司 Virtual machine migration method and related device
CN115858503A (en) * 2023-02-28 2023-03-28 江西师范大学 Heterogeneous database migration management method and system based on migration linked list

Also Published As

Publication number Publication date
CN110958297B (en) 2021-10-01

Legal Events

Code  Title
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant