CN116627933A - Mirror image warehouse migration method and device - Google Patents

Mirror image warehouse migration method and device

Info

Publication number
CN116627933A
CN116627933A
Authority
CN
China
Prior art keywords
source
image
warehouse
cloud
mirror
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310524051.3A
Other languages
Chinese (zh)
Inventor
梁晓雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202310524051.3A priority Critical patent/CN116627933A/en
Publication of CN116627933A publication Critical patent/CN116627933A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/214 Database migration support
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533 Hypervisors; Virtual machine monitors
    • G06F9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the disclosure disclose a mirror image warehouse (container image repository) migration method and device. A specific implementation of the method comprises the following steps: creating a source-back read image repository on a target cloud; setting the back-to-source address of the source-back read image repository to the address of the image warehouse on a source cloud; modifying the address pointed to by the external domain name of the image warehouse on the source cloud to the address of the source-back read image repository; and traversing the images in the image warehouse on the source cloud and migrating each image into the source-back read image repository. The embodiment realizes uninterrupted, general-purpose, real-time migration of an image repository with a large data volume.

Description

Mirror image warehouse migration method and device
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a mirror image warehouse migration method and device.
Background
Under the current cloud-native model, most of the artifacts a platform delivers are platform-independent docker images, image artifacts conforming to the OCI standard, chart packages, and the like. These artifacts are stored in a unified, centralized docker image repository. In a multi-cloud setting, to avoid being locked in to a single cloud, a seamless migration scheme is needed that affects neither normal business traffic nor the PaaS products currently in use, so that the image repository can be migrated seamlessly from one cloud to another.
Cross-cloud migration of an image repository involves two problems: migration of stock data and migration of incremental data. The common practice at present is to first migrate the stock data, then migrate the service to the new storage, and finally migrate the incremental data. Another approach relies on an image back-to-source rule at the underlying storage level, i.e., incremental data is read back from the source.
For the approach of migrating stock data, migrating the service, and then migrating incremental data: when the data volume is small (less than 100 TB) and the per-second concurrency is low (less than 1000/s), the incremental data does not grow much while the stock data is being migrated, and migrating the incremental data afterwards has a negligible impact on the service. However, when the data volume grows to the PB level and concurrency reaches tens of thousands per second (peak 70 k/s), migrating the stock data takes at least 4-5 calendar days, during which the incremental data generated is no longer negligible. Seamless migration is then impossible.
The approach of migrating the stock data while the new storage adds a storage-level image back-to-source rule, i.e., reading incremental data back from the source, depends strongly on the capabilities of the underlying storage and may even be tied to the storage of a particular cloud provider.
Disclosure of Invention
The embodiment of the disclosure provides a mirror warehouse migration method and device.
In a first aspect, an embodiment of the present disclosure provides a mirrored repository migration method, including: creating a source-back read image repository on a target cloud; setting the back-to-source address of the source-back read image repository to the address of the image warehouse on a source cloud; modifying the address pointed to by the external domain name of the image warehouse on the source cloud to the address of the source-back read image repository; and traversing the images in the image warehouse on the source cloud and migrating each image into the source-back read image repository.
In some embodiments, the method further comprises: and deleting the image warehouse on the source cloud in response to the completion of the migration of all the images in the image warehouse on the source cloud.
In some embodiments, the method further comprises: in response to the source-back read image repository receiving a pull request for a target image from a client, checking whether the target image exists in the source-back read image repository; and if so, sending the target image to the client.
In some embodiments, the method further comprises: if not, acquiring the target image from an image warehouse on the source cloud; and sending the target image to the client.
In some embodiments, the method further comprises: storing the target image into the source-back read image repository.
In some embodiments, the method further comprises: and responding to the push request of the target image received by the source-back read image warehouse from the client, and storing the target image into the source-back read image warehouse.
In some embodiments, the method further comprises: in response to the image warehouse on the source cloud receiving a push request for a target image from a client, storing the target image into the source-back read image repository.
In a second aspect, embodiments of the present disclosure provide a mirrored repository migration apparatus, comprising: a creation unit configured to create a source-back read image repository on the target cloud; a source return unit configured to set the back-to-source address of the source-back read image repository to the address of the image warehouse on the source cloud; an orientation unit configured to modify the address pointed to by the external domain name of the image warehouse on the source cloud to the address of the source-back read image repository; and a migration unit configured to traverse the images in the image warehouse on the source cloud and migrate each image into the source-back read image repository.
In some embodiments, the apparatus further comprises a deletion unit configured to: and deleting the image warehouse on the source cloud in response to the completion of the migration of all the images in the image warehouse on the source cloud.
In some embodiments, the apparatus further comprises a pulling unit configured to: in response to the source-back read image repository receiving a pull request for a target image from a client, checking whether the target image exists in the source-back read image repository; and if so, sending the target image to the client.
In some embodiments, the pull unit is further configured to: if not, acquiring the target image from an image warehouse on the source cloud; and sending the target image to the client.
In some embodiments, the apparatus further comprises a storage unit configured to: store the target image into the source-back read image repository.
In some embodiments, the apparatus further comprises a pushing unit configured to: and responding to the push request of the target image received by the source-back read image warehouse from the client, and storing the target image into the source-back read image warehouse.
In some embodiments, the apparatus further comprises a pushing unit configured to: in response to the image warehouse on the source cloud receiving a push request for a target image from a client, store the target image into the source-back read image repository.
In a third aspect, embodiments of the present disclosure provide an electronic device for mirrored repository migration, comprising: one or more processors; storage means having stored thereon one or more computer programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the first aspects.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any of the first aspects.
The embodiments of the disclosure provide a seamless scheme for migrating an image repository across clouds with a large amount of stored data. By building a source-back read repository, migrating the repository service, migrating the full stock data of the source repository, and finally taking the source image repository offline or deleting it, uninterrupted, general-purpose, real-time migration of a large-data-volume image repository is realized.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure may be applied;
FIG. 2 is a flow chart of one embodiment of a mirrored repository migration method according to the present disclosure;
FIG. 3 is a flow chart of yet another embodiment of a mirrored repository migration method according to the present disclosure;
FIG. 4 is a schematic illustration of one application scenario of a mirrored warehouse migration method according to the present disclosure;
FIG. 5 is a schematic structural view of one embodiment of a mirrored warehouse migration apparatus according to the present disclosure;
fig. 6 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates an exemplary system architecture to which embodiments of the mirrored repository migration method or mirrored repository migration apparatus of the present disclosure may be applied.
As shown in fig. 1, the system architecture may include a management server, a source cloud and an image repository (old repository) on the source cloud, a target cloud and a source-back read image repository (new repository) on the target cloud, and a client.
The management server is connected with various clouds through a wired or wireless network and is used for managing various cloud resources and mirror image warehouses, including adding and deleting cloud resources, distributing cloud resources, establishing the mirror image warehouse and deleting the mirror image warehouse.
The source cloud and the target cloud may belong to the same or different operators. The old warehouse stores stock data and the incremental data is directly stored in the new warehouse. Stock data also needs to be migrated to a new warehouse. The client may be a docker client, or kubelet, as long as it is a tool that can be used to push and pull mirrored data, and is not limited herein.
It should be noted that, the image repository migration method provided by the embodiment of the present disclosure is generally executed by the management server, and accordingly, the image repository migration device is generally disposed in the management server.
It should be understood that the numbers of management servers, source clouds and image warehouses on the source cloud (old repositories), and target clouds and source-back read image repositories on the target cloud (new repositories) shown in fig. 1 are merely illustrative. There may be any number of management servers, source clouds and image warehouses on the source cloud, and target clouds and source-back read image repositories on the target cloud, as required by the implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a mirrored repository migration method according to the present disclosure is shown. The mirror image warehouse migration method comprises the following steps:
step 201, a source-back read mirror repository on a target cloud is created.
In this embodiment, the execution body of the image repository migration method (e.g., the management server) receives an image repository migration request, where the migration request includes the address of the source cloud and the address of the target cloud. The execution body may log in to the container image service console of the target cloud and, in the top menu bar, select the address of the target cloud to create an image repository. The image repository is configured as a source-back read image repository, i.e., one that can fall back to reading from the old repository.
Step 202, setting a source return address of the source-readable mirror repository as an address of the mirror repository on the source cloud.
In this embodiment, the configuration information of the image repository includes a back-to-source address, which is set to the address of the image warehouse on the source cloud. When the source-back read image repository receives a request to pull a target image and the target image is not stored locally, it looks up the target image in the image warehouse on the source cloud; once found, the target image is sent to the client and is also saved into the source-back read image repository.
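For concreteness, if the source-back read image repository were backed by a CNCF Distribution ("docker registry") instance, this back-to-source behaviour corresponds to its documented pull-through cache, configured through the proxy.remoteurl setting. The sketch below only renders such a configuration under that assumption; the hostnames and credentials are placeholders, not values from the disclosure.

```python
# Minimal sketch: render a Distribution (docker registry) config whose
# "proxy.remoteurl" points at the old repository on the source cloud, so that
# images missing locally are fetched back-to-source on demand.
# All hostnames and credentials below are hypothetical placeholders.
import yaml  # pip install pyyaml

SOURCE_REPO_URL = "https://registry.source-cloud.example.com"  # old repository (assumed)

config = {
    "version": 0.1,
    "storage": {"filesystem": {"rootdirectory": "/var/lib/registry"}},
    "http": {"addr": ":5000"},
    # Pull-through cache: images not found locally are read from the source
    # repository and cached locally, i.e. the "back-to-source read" behaviour.
    "proxy": {
        "remoteurl": SOURCE_REPO_URL,
        "username": "migration-bot",   # placeholder credential
        "password": "change-me",       # placeholder credential
    },
}

with open("config.yml", "w", encoding="utf-8") as fh:
    yaml.safe_dump(config, fh, sort_keys=False)

print("wrote config.yml; a registry started with this file reads back-to-source on misses")
```

A registry started with such a file answers local misses by reading from the old repository and caching the result, which is the behaviour steps 201-202 require.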
And 203, modifying the address pointed by the external domain name of the mirror image warehouse on the source cloud into the address of the readable mirror image warehouse of the source.
In this embodiment, the external domain name of the image warehouse on the source cloud originally points to the source repository itself; it is now modified to point to the address of the source-back read image repository, so that any request to access the image warehouse on the source cloud is actually sent to the address of the source-back read image repository, which is equivalent to a redirection. Incremental image data is therefore written directly to the new repository and no longer has to be stored in the old repository and migrated later.
Step 204, traversing the images in the image warehouse on the source cloud, and migrating each image into the readable image warehouse of the source.
In this embodiment, the images in the image warehouse on the source cloud are the stock data, which take a long time to migrate. These data remain in the old repository until the migration is complete, so the service is not interrupted. The images in the image warehouse on the source cloud are migrated to the new repository one by one, thereby completing the migration of the stock image data.
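A minimal sketch of what the traversal in step 204 could look like, assuming the source repository exposes the standard Docker Registry HTTP API v2 (the /v2/_catalog and /v2/<name>/tags/list endpoints) and that a docker client on the migration host is already logged in to both registries. Hostnames are placeholders, and pagination, authentication and error handling are omitted.

```python
# Sketch of step 204: enumerate every image in the old repository via the
# Docker Registry HTTP API v2 and copy it to the new repository with the
# local docker client. Hostnames are hypothetical placeholders.
import subprocess
import requests

SOURCE = "registry.source-cloud.example.com"   # old repository (placeholder)
TARGET = "registry.target-cloud.example.com"   # source-back read repository (placeholder)

def list_repositories(registry: str) -> list[str]:
    resp = requests.get(f"https://{registry}/v2/_catalog", timeout=30)
    resp.raise_for_status()
    return resp.json().get("repositories", [])

def list_tags(registry: str, repo: str) -> list[str]:
    resp = requests.get(f"https://{registry}/v2/{repo}/tags/list", timeout=30)
    resp.raise_for_status()
    return resp.json().get("tags") or []

def migrate_image(repo: str, tag: str) -> None:
    src = f"{SOURCE}/{repo}:{tag}"
    dst = f"{TARGET}/{repo}:{tag}"
    # docker pull / tag / push; a tool such as `skopeo copy` could be used instead.
    subprocess.run(["docker", "pull", src], check=True)
    subprocess.run(["docker", "tag", src, dst], check=True)
    subprocess.run(["docker", "push", dst], check=True)

if __name__ == "__main__":
    for repo in list_repositories(SOURCE):
        for tag in list_tags(SOURCE, repo):
            migrate_image(repo, tag)
```

The point of the sketch is only that the loop runs outside the repositories and uses ordinary push/pull tooling, so it does not depend on the underlying storage.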
Through the above steps, the method provided by this embodiment of the disclosure finally completes the migration of the image repository from the source cloud to the target cloud, and throughout the process the domain name through which the image repository serves external traffic remains available, so the goal of seamless migration is achieved. Because the back-to-source read rule is implemented at the image repository level rather than at the storage level, seamless migration of the image repository becomes feasible and reliable, and the benefit is most pronounced under large data volumes (PB level) and high concurrency (70 k/s).
In some optional implementations of this embodiment, the method further includes: and deleting the image warehouse on the source cloud in response to the completion of the migration of all the images in the image warehouse on the source cloud. After the images in the old repository have all migrated to the new repository, the old repository may be deleted to free up resources for use by other services.
With further reference to FIG. 3, a flow 300 of yet another embodiment of a mirrored repository migration method is shown. The flow 300 of the mirrored repository migration method includes the steps of:
step 301, a source-back read mirror repository on a target cloud is created.
In step 302, the source return address of the source-readable mirror repository is set as the address of the mirror repository on the source cloud.
And step 303, modifying the address pointed by the external domain name of the mirror image warehouse on the source cloud into the address of the readable mirror image warehouse of the source.
Step 304, traversing the images in the image warehouse on the source cloud, and migrating each image to the readable image warehouse of the source.
Steps 301-304 are substantially identical to steps 201-204 and are therefore not described in detail.
In response to the source-back read image repository receiving a pull request for the target image from the client, a check is made as to whether the target image is present in the source-back read image repository, step 305.
In this embodiment, a k8s kubelet or Docker initiates a pull request for a Docker image to a source-recoverable read image repository built on top of the new cloud. The source-readable repository may check to see if it has the mirrored data itself.
Step 306, if present, the target image is sent to the client.
In this embodiment, the target image is sent to the client in response to the image pull request, if any.
And step 307, if the target image does not exist, acquiring the target image from an image warehouse on the source cloud, and sending the target image to the client.
In this embodiment, if the target image does not exist locally, the image data is first acquired from the image warehouse built on the source cloud, and the target image is then sent to the client in response to the pull request.
In some optional implementations of this embodiment, the method further includes: storing the target image into the source-back read image repository.
While responding to the pull request, a copy of the obtained source image data is persisted locally in the target cloud image repository. In this way, incremental data that the old repository may still receive, possibly even after the stock-data migration has completed, and any other data that has not yet been migrated to the new repository, is back-filled on demand. Seamless migration is thereby achieved.
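The check-then-fall-back logic of steps 305-307, together with this optional local persistence, can be sketched as follows. This is only an illustration of the flow, not a registry's real storage layout: the flat file cache and the manifest-only handling are simplifying assumptions, and the source registry hostname is a placeholder.

```python
# Sketch of steps 305-307: the new repository first checks its own storage for
# the requested manifest; on a miss it reads the manifest back from the source
# repository, returns it to the caller, and persists a local copy so the next
# pull is served locally.
from pathlib import Path
import requests

SOURCE_REGISTRY = "https://registry.source-cloud.example.com"  # placeholder
LOCAL_STORE = Path("/var/lib/new-registry-cache")              # placeholder

MANIFEST_MEDIA_TYPE = "application/vnd.docker.distribution.manifest.v2+json"

def local_path(repo: str, reference: str) -> Path:
    return LOCAL_STORE / repo / f"{reference}.manifest.json"

def handle_pull(repo: str, reference: str) -> bytes:
    """Return the manifest bytes for repo:reference, reading back-to-source on a miss."""
    cached = local_path(repo, reference)
    if cached.exists():                      # steps 305/306: present locally
        return cached.read_bytes()

    # step 307: not present locally, read it back from the source repository
    url = f"{SOURCE_REGISTRY}/v2/{repo}/manifests/{reference}"
    resp = requests.get(url, headers={"Accept": MANIFEST_MEDIA_TYPE}, timeout=30)
    resp.raise_for_status()

    cached.parent.mkdir(parents=True, exist_ok=True)
    cached.write_bytes(resp.content)         # optional persistence described above
    return resp.content                      # served to the client either way
```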
In some optional implementations of this embodiment, the method further includes: in response to the source-back read image repository receiving a push request for a target image from a client, storing the target image into the source-back read image repository. A k8s kubelet, Docker, or another OCI-compatible tool initiates a push request for a docker image, and the target cloud image repository responds to the push request initiated by the client, i.e., the docker image data is persisted to its local storage. Through the above steps, on the one hand, the incremental data of the image repository is guaranteed to be kept in the image repository of the target cloud; on the other hand, the stock images that pods or containers depend on can still be pulled normally and are synchronized at pull time.
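For symmetry with the pull sketch above, a push that reaches the new repository is simply persisted to its own storage, so incremental images stay on the target cloud. The flat file layout below is the same illustrative assumption as before, not a real registry layout.

```python
# Sketch of the push flow: the source-back read image repository on the target
# cloud persists pushed manifest data directly to its own storage.
from pathlib import Path

LOCAL_STORE = Path("/var/lib/new-registry-cache")   # placeholder, same as the pull sketch

def handle_push(repo: str, reference: str, manifest: bytes) -> None:
    """Persist a pushed manifest for repo:reference into local storage."""
    target = LOCAL_STORE / repo / f"{reference}.manifest.json"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(manifest)
```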
In some optional implementations of this embodiment, the method further includes: in response to the image warehouse on the source cloud receiving a push request for a target image from a client, storing the target image into the source-back read image repository. Push requests received by the old repository are no longer persisted to its own storage but are forwarded to the new repository for storage, which saves the new repository from having to migrate that data from the old repository later.
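One way the forwarding described here could be realized is to replay any write request that still reaches the old repository against the new one, for example from a small reverse proxy placed in front of the old repository. The sketch below is an assumption-laden illustration: the target hostname is a placeholder, and chunked blob uploads, authentication and response streaming are omitted.

```python
# Sketch of the optional push-forwarding behaviour: a write request that still
# arrives at the old repository (e.g. a manifest or blob PUT under /v2/...) is
# replayed against the new, source-back read repository instead of being
# persisted locally.
import requests

NEW_REGISTRY = "https://registry.target-cloud.example.com"  # placeholder

def forward_write(method: str, path: str, headers: dict[str, str], body: bytes) -> int:
    """Replay a write request (PUT/POST/PATCH on /v2/...) against the new repository.

    Returns the status code from the new repository so the old repository can
    relay it to the pushing client.
    """
    assert method in {"PUT", "POST", "PATCH"}, "only write requests are forwarded"
    resp = requests.request(method, f"{NEW_REGISTRY}{path}", headers=headers,
                            data=body, timeout=60)
    return resp.status_code

# Hypothetical usage: a manifest push that reached the old repository is forwarded verbatim.
# status = forward_write("PUT", "/v2/myapp/manifests/v1.2.3",
#                        {"Content-Type": "application/vnd.docker.distribution.manifest.v2+json"},
#                        manifest_bytes)
```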
With continued reference to fig. 4, fig. 4 is a schematic diagram of an application scenario of the mirrored repository migration method according to the present embodiment. In the application scenario of fig. 4, the cross-cloud seamless migration mirror warehouse flow is as follows:
1) Building a source-recoverable read mirror warehouse on the target cloud;
2) Setting a source of the source-readable mirror image warehouse as a mirror image warehouse on a source cloud;
3) And switching the external domain name of the mirror image warehouse to the source-recoverable mirror image warehouse built in the step 1, wherein the source-recoverable mirror image warehouse ensures the availability of stock mirror images and increment mirror images.
4) Performing total migration of stock mirror image data of a source cloud mirror image warehouse: traversing the mirror image of the mirror image warehouse on the source cloud, and carrying out mirror image migration by using a docker push or other tools. The process is outside the mirror warehouse, does not depend on the underlying storage, is applicable to general mirror warehouse tools, and has generality.
5) After the process in step 4 is completed, the source image repository set in step 1 as the back-to-source target is removed from that configuration, and the source image repository is taken offline.
Through the steps, the migration of the mirror warehouse from the source cloud to the target cloud is finally completed, and in the whole process, the domain name service of the mirror warehouse for external service is always available, so that the target of seamless migration is achieved.
A single docker image pull flow:
1) The k8s kubelet or Docker initiates a pull request for a Docker image to a source-recoverable read image repository built on top of the new cloud.
2) The source readable repository checks itself for the mirrored data:
a) If present, respond to the image pull request in 1);
b) If not present, the image data is obtained from the image warehouse built on the source cloud; while responding to the pull request in 1), a copy of the obtained source image data is persisted locally in the target cloud image repository.
A single docker image push flow:
1) A k8s kubelet, Docker, or another OCI-compatible tool initiates a push request for a docker image.
2) The target cloud image repository responds to the push request initiated in 1), i.e., the docker image data is persisted to its local storage.
Through the above steps, on the one hand, the incremental data of the image repository is guaranteed to be kept in the image repository of the target cloud; on the other hand, the stock images that pods or containers depend on can still be pulled normally and are synchronized at pull time.
With further reference to fig. 5, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a mirrored warehouse migration apparatus, where the apparatus embodiment corresponds to the method embodiment shown in fig. 2, and the apparatus is particularly applicable to various electronic devices.
As shown in fig. 5, the mirrored repository migration apparatus 500 of the present embodiment includes: a creation unit 501, a source return unit 502, a direction unit 503, and a migration unit 504. Wherein the creating unit 501 is configured to create a source-back read image repository on the target cloud; a source return unit 502 configured to set a source return address of the source-recoverable read image repository as an address of the image repository on the source cloud; a directing unit 503 configured to modify an address pointed by an external domain name of a mirror image warehouse on the source cloud into an address of the source readable mirror image warehouse; a migration unit 504 configured to traverse the images in the image repository on the source cloud, migrating each image into the source-readable image repository.
In this embodiment, specific processes of the creation unit 501, the source return unit 502, the orientation unit 503, and the migration unit 504 of the image repository migration apparatus 500 may refer to steps 201, 202, 203, and 204 in the corresponding embodiment of fig. 2.
In some optional implementations of the present embodiment, the apparatus 500 further includes a deleting unit (not shown in the drawings) configured to: and deleting the image warehouse on the source cloud in response to the completion of the migration of all the images in the image warehouse on the source cloud.
In some optional implementations of the present embodiment, the apparatus 500 further includes a pulling unit (not shown in the drawings) configured to: in response to the source-back read image repository receiving a pull request for a target image from a client, checking whether the target image exists in the source-back read image repository; and if so, sending the target image to the client.
In some optional implementations of this embodiment, the pull unit is further configured to: if not, acquiring the target image from an image warehouse on the source cloud; and sending the target image to the client.
In some optional implementations of the present embodiment, the apparatus 500 further includes a storage unit (not shown in the drawings) configured to: store the target image into the source-back read image repository.
In some optional implementations of the present embodiment, the apparatus 500 further includes a pushing unit (not shown in the drawings) configured to: and responding to the push request of the target image received by the source-back read image warehouse from the client, and storing the target image into the source-back read image warehouse.
In some optional implementations of the present embodiment, the apparatus 500 further includes a pushing unit (not shown in the drawings) configured to: in response to the image warehouse on the source cloud receiving a push request for a target image from a client, store the target image into the source-back read image repository.
It should be noted that, in the technical solution of the present disclosure, the collection, updating, analysis, processing, use, transmission and storage of users' personal information all comply with the relevant laws and regulations, are carried out for lawful purposes, and do not violate public order and good customs. Necessary measures are taken to protect users' personal information, to prevent illegal access to users' personal information data, and to maintain users' personal information security, network security and national security.
According to an embodiment of the disclosure, the disclosure further provides an electronic device and a readable storage medium.
An electronic device, comprising: one or more processors; and a storage device having one or more computer programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method of flow 200 or 300.
A computer readable medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the method of flow 200 or 300.
Fig. 6 illustrates a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above, such as the mirrored repository migration method. For example, in some embodiments, the mirrored repository migration method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by the computing unit 601, one or more steps of the mirrored repository migration method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the mirrored repository migration method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a server of a distributed system or a server that incorporates a blockchain. The server can also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel or sequentially or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (10)

1. A mirrored repository migration method, comprising:
creating a source-back read image repository on a target cloud;
setting a back-to-source address of the source-back read image repository to an address of an image warehouse on a source cloud;
modifying an address pointed to by an external domain name of the image warehouse on the source cloud to the address of the source-back read image repository; and
traversing images in the image warehouse on the source cloud, and migrating each image into the source-back read image repository.
2. The method of claim 1, wherein the method further comprises:
and deleting the image warehouse on the source cloud in response to the completion of the migration of all the images in the image warehouse on the source cloud.
3. The method of claim 1, wherein the method further comprises:
in response to the source-back read image repository receiving a pull request for a target image from a client, checking whether the target image exists in the source-back read image repository;
and if so, sending the target image to the client.
4. A method according to claim 3, wherein the method further comprises:
if not, acquiring the target image from an image warehouse on the source cloud;
and sending the target image to the client.
5. The method of claim 4, wherein the method further comprises:
and storing the target image into the source-back read image repository.
6. The method of claim 1, wherein the method further comprises:
and responding to the push request of the target image received by the source-back read image warehouse from the client, and storing the target image into the source-back read image warehouse.
7. The method of claim 1, wherein the method further comprises:
and in response to the image warehouse on the source cloud receiving a push request for a target image from a client, storing the target image into the source-back read image repository.
8. A mirrored repository migration apparatus, comprising:
a creation unit configured to create a source-back read image repository on the target cloud;
a source return unit configured to set a source return address of the source-readable mirror repository as an address of the mirror repository on the source cloud;
the orientation unit is configured to modify the address pointed by the external domain name of the mirror image warehouse on the source cloud into the address of the source readable mirror image warehouse;
and the migration unit is configured to traverse the images in the image warehouse on the source cloud and migrate each image into the source-back read image repository.
9. An electronic device for mirrored repository migration, comprising:
one or more processors;
a storage device having one or more computer programs stored thereon,
when executed by the one or more processors, causes the one or more processors to implement the method of any of claims 1-7.
10. A computer readable medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-7.
CN202310524051.3A 2023-05-10 2023-05-10 Mirror image warehouse migration method and device Pending CN116627933A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310524051.3A CN116627933A (en) 2023-05-10 2023-05-10 Mirror image warehouse migration method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310524051.3A CN116627933A (en) 2023-05-10 2023-05-10 Mirror image warehouse migration method and device

Publications (1)

Publication Number Publication Date
CN116627933A true CN116627933A (en) 2023-08-22

Family

ID=87591171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310524051.3A Pending CN116627933A (en) 2023-05-10 2023-05-10 Mirror image warehouse migration method and device

Country Status (1)

Country Link
CN (1) CN116627933A (en)

Similar Documents

Publication Publication Date Title
CN112905537B (en) File processing method and device, electronic equipment and storage medium
WO2022170782A1 (en) Micro-service configuration method and apparatus, electronic device, system, and storage medium
CN112597126A (en) Data migration method and device
CN114443076A (en) Mirror image construction method, device, equipment and storage medium
KR20220026603A (en) File handling methods, devices, electronic devices and storage media
CN112671892A (en) Data transmission method, data transmission device, electronic equipment, medium and computer program product
CN115514718B (en) Data interaction method, control layer and equipment based on data transmission system
CN114070889B (en) Configuration method, traffic forwarding device, storage medium, and program product
CN116069497A (en) Method, apparatus, device and storage medium for executing distributed task
EP4092544A1 (en) Method, apparatus and storage medium for deduplicating entity nodes in graph database
CN115510036A (en) Data migration method, device, equipment and storage medium
CN116627933A (en) Mirror image warehouse migration method and device
CN112860796B (en) Method, apparatus, device and storage medium for synchronizing data
CN116503005A (en) Method, device, system and storage medium for dynamically modifying flow
CN114417070A (en) Method, device and equipment for converging data authority and storage medium
CN113126928A (en) File moving method and device, electronic equipment and medium
CN113781154A (en) Information rollback method, system, electronic equipment and storage medium
CN111258954B (en) Data migration method, device, equipment and storage medium
US11656950B2 (en) Method, electronic device and computer program product for storage management
CN114327271B (en) Lifecycle management method, apparatus, device and storage medium
CN113805858B (en) Method and device for continuously deploying software developed by scripting language
US11379147B2 (en) Method, device, and computer program product for managing storage system
CN114780022B (en) Method and device for realizing additional writing operation, electronic equipment and storage medium
CN116594764A (en) Application program updating method and device, electronic equipment and storage medium
CN115408195A (en) Batch task management method, equipment and storage medium for heterogeneous platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination