CN110058922B - Method and device for extracting metadata of machine learning task - Google Patents


Info

Publication number
CN110058922B
CN110058922B (application CN201910208590.XA)
Authority
CN
China
Prior art keywords
metadata
machine learning
training
type
learning task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910208590.XA
Other languages
Chinese (zh)
Other versions
CN110058922A (en)
Inventor
刘烨东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201910208590.XA
Publication of CN110058922A
PCT application PCT/CN2020/070577 (published as WO2020186899A1)
Application granted
Publication of CN110058922B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45562Creating, deleting, cloning virtual machine instances
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • G06F9/45558Hypervisor-specific management and integration aspects
    • G06F2009/45583Memory management, e.g. access or allocation

Abstract

The application provides a method for extracting metadata from a machine learning task, applied in a virtualization environment and comprising the following steps: running a machine learning task in the virtualization environment according to machine learning program code input by a user; extracting metadata from the machine learning program code, the metadata being used to reproduce the execution environment of the machine learning task; and storing the metadata in a first storage space. With this technical scheme, the metadata required to reproduce a specific training environment is extracted automatically during the training of the target machine learning task. When other developers want to reproduce that training environment, they can do so from the stored metadata, which accelerates propagation of the model.

Description

Method and device for extracting metadata of machine learning task
Technical Field
The present application relates to the field of cloud computing, and more particularly, to a method, an apparatus, and a computer-readable storage medium for extracting metadata of a machine learning task.
Background
Machine learning (ML) is a multidisciplinary field that studies how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence; it is applied in every field of artificial intelligence.
The workflow of a machine learning task may include an environment building process, a model training process, and a model inference process. After a source developer has trained a model through this workflow, the trained model is provided to other developers. If other developers want to reproduce the training process, the source development environment must be reproduced in full. However, reproducing the source development environment requires other developers to spend a great deal of time building and debugging a training environment compatible with the target machine learning task, which greatly hinders propagation of the model.
Disclosure of Invention
The application provides a method and an apparatus for extracting metadata in a machine learning task. The metadata required to reproduce a specific training environment is extracted automatically during the training of the target machine learning task, and when other developers want to reproduce that training environment, they can do so according to the stored metadata, thereby accelerating propagation of the model.
In a first aspect, a method for extracting metadata in a machine learning task is provided, the method being applied in a virtualization environment and comprising: running a machine learning task in the virtualization environment according to machine learning program code input by a user; extracting metadata from the machine learning program code, the metadata being used to reproduce the execution environment of the machine learning task; and storing the metadata in a first storage space.
In one possible implementation, the metadata is extracted from the machine learning program code by way of a keyword search according to the type of the metadata.
In another possible implementation, the virtualization environment runs the machine learning task through at least one training container, and the metadata includes a first type of metadata. The first type metadata may be extracted from an input training container start script according to a type of the first type metadata, and the training container start script is used to start the at least one training container.
In another possible implementation, the type of the first type of metadata includes any one or more of: a framework used by the machine learning task, a model used by the machine learning task, and a dataset used in a training process of the machine learning task.
In another possible implementation, the virtualization environment runs the machine learning task through at least one training container, and the metadata includes a second type of metadata. The metadata may be extracted from input training program code according to a type of the second type of metadata, the training program code being stored in a second storage space mounted on the at least one training container, the training program code being configured to run a model training process of the machine learning task in the at least one training container.
In another possible implementation, the type of the second type of metadata includes any one or more of: a processing mode of a data set used in a training process of the machine learning task, a structure of a model used in the training process of the machine learning task, and a training parameter used in the training process of the machine learning task.
In a second aspect, an apparatus for extracting metadata in a machine learning task is provided, the apparatus running in a virtualized environment and comprising:
a running module, configured to run a machine learning task in the virtualization environment according to machine learning program code input by a user; and
a metadata extraction module, configured to extract metadata from the machine learning program code, where the metadata is used to reproduce the running environment of the machine learning task;
the metadata extraction module is further configured to store the metadata in a first storage space.
In a possible implementation, the metadata extraction module is specifically configured to extract the metadata from the machine learning program code by way of keyword search according to the type of the metadata.
In another possible implementation, the virtualization environment runs the machine learning task through at least one training container, the metadata including a first type of metadata;
the metadata extraction module is specifically configured to extract the first type of metadata from an input training container start script according to the type of the first type of metadata, where the training container start script is used to start the at least one training container.
In another possible implementation, the type of the first type of metadata includes any one or more of: a framework used by the machine learning task, a model used by the machine learning task, and a dataset used in a training process of the machine learning task.
In another possible implementation, the virtualization environment runs the machine learning task through at least one training container, the metadata including a second type of metadata;
the metadata extraction module is specifically configured to extract the metadata from input training program code according to the type of the second type of metadata, where the training program code is stored in a second storage space mounted on the at least one training container and is used to run a model training process of the machine learning task in the at least one training container.
In another possible implementation, the type of the second type of metadata includes any one or more of: a processing mode of a data set used in a training process of the machine learning task, a structure of a model used in the training process of the machine learning task, and a training parameter used in the training process of the machine learning task.
In a third aspect, a system for extracting metadata in a machine learning task is provided. The system includes at least one server, each server including a memory and at least one processor, the memory being used to store program instructions. When the at least one server runs, the at least one processor executes the program instructions in the memory to perform the method in the first aspect or any one of its possible implementations, or to implement the running module and the metadata extraction module in the second aspect or any one of its possible implementations.
In one possible implementation, the running module may run on a plurality of servers, and the metadata extraction module may run on each of the plurality of servers.
In another possible implementation, the metadata extraction module may run on a portion of the plurality of servers.
In another possible implementation, the metadata extraction module may run on any other server than the plurality of servers described above.
Alternatively, the processor may be a general-purpose processor, and may be implemented by hardware or software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented in software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, which may be integrated with the processor, located external to the processor, or stand-alone.
In a fourth aspect, a non-transitory readable storage medium is provided, which includes program instructions, when the program instructions are executed by a computer, the computer performs the method according to the first aspect and any one of the possible implementation manners of the first aspect.
In a fifth aspect, a computer program product is provided, which comprises program instructions, which when executed by a computer, perform the method according to the first aspect and any one of the possible implementations of the first aspect.
Further implementations of the present application may be obtained by combining the implementations provided in the above aspects.
Drawings
Fig. 1 is a schematic block diagram of an apparatus 100 for executing a machine learning task according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of performing a machine learning task according to an embodiment of the present application.
Fig. 3 is a schematic block diagram of a container environment 300 provided by an embodiment of the present application.
Fig. 4 is a schematic flow chart of a method for extracting metadata by a metadata extraction module according to an embodiment of the present application.
Fig. 5 is a schematic block diagram of a system 500 for extracting metadata in a machine learning training process according to an embodiment of the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
Machine learning (ML) is a multidisciplinary field that studies how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to endow computers with intelligence; it is applied in every field of artificial intelligence. The workflow of a machine learning task may include an environment building process, a model training process, and a model inference process.
Fig. 1 is a schematic block diagram of an apparatus 100 for executing a machine learning task according to an embodiment of the present application. The apparatus 100 may include a running module 110, a metadata extraction module 120, and a second storage space 121 mounted on the metadata extraction module 120. These modules are described in detail below.
The operation module 110 may include a plurality of sub-modules, such as: an environment building submodule 111, a training submodule 112, an inference submodule 113 and an environment destroying submodule 114.
It should be understood that the running module 110, the metadata extraction module 120, and sub-modules thereof may run in a virtualized environment, for example, may be implemented by using a container, and for example, may also be implemented by using a virtual machine, which is not specifically limited in this embodiment of the present application.
(1) Environment construction submodule 111:
The environment construction sub-module 111 is used to build the training environment of the machine learning task. Building the machine learning task environment is, in essence, the scheduling of computer hardware resources, which may include, but are not limited to, computing resources and storage resources.
As machine learning tasks become more complex and computationally intensive, moving them to the cloud and into containers has become a development trend. Container technology, as represented by Docker, has matured; it uses images to create a virtualized runtime environment in which related components can be deployed. Container technology provisions computing and storage resources so that the computing and storage resources of the physical machine can be invoked directly, providing hardware resources for machine learning tasks. Open-source container scheduling platforms, as represented by Kubernetes, can manage such containers efficiently.
It should be appreciated that Docker is an open-source application container engine that allows source developers to package their applications and dependencies into a portable container, distribute it to any popular Linux machine, and thereby implement virtualization. For convenience of description, the technical solutions provided in the embodiments of the present application are described in detail below by taking a container as an example. When the virtualization environment in which the running module 110 and the metadata extraction module 120 are located is a virtual machine, the running module 110, the metadata extraction module 120, and their sub-modules may be implemented by virtual machines.
The embodiment of the present application does not specifically limit the computing resource, which may be a central processing unit (CPU) or a graphics processing unit (GPU).
In particular, the source developer can package related components into container images, such as the container image of the training component, and pull these images into the container environment. A training container is then created and started through a command line or container start script input by the source developer, and the model training process is performed in the training container.
(2) Training submodule 112:
the training submodule 112 may operate in the container environment constructed as described above, and perform a training process of the model according to the training program code input by the source developer.
Specifically, the source developer may store the training program code in the first storage space 115 by means of Network File System (NFS) shared storage or another storage product on the cloud platform, for example a distributed file system (DFS), and the first storage space 115 may be mounted in the started training container. The training sub-module 112 may train the model according to the training program code stored in the first storage space 115.
The training sub-module 112 may also store the trained model in the first storage space 115 during the training process.
(3) The inference submodule 113:
The inference sub-module 113 may access the first storage space 115 and perform an inference process based on the trained model stored there. Specifically, the inference sub-module 113 may determine a predicted output value based on input training data and the trained model, and may judge whether the model trained by the training sub-module 112 is correct according to the error between the predicted output value and the prior knowledge of the training data.
It should be understood that the prior knowledge, also referred to as the ground truth, is generally a human-provided expected result corresponding to the training data.
For example, suppose the machine learning task is applied in the field of image recognition. The training data input to the model trained by the training sub-module 112 is the pixel information of an image, and the prior knowledge corresponding to that training data is the image label "dog". Training data whose image is labeled "dog" is input into the trained model, and whether the predicted value output by the model is "dog" is checked. If the output of the model is "dog", the model can be judged to predict accurately.
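As a concrete illustration of this check, the sketch below scores a model against the prior knowledge attached to each sample. The toy model, the pixel-sum threshold rule, and the sample data are all invented for illustration and are not part of the patent.

```python
def evaluate(model, samples):
    """Run the trained model over (input, ground_truth) pairs and
    report the fraction of predictions matching the prior knowledge."""
    correct = sum(1 for x, truth in samples if model(x) == truth)
    return correct / len(samples)

# Toy stand-in for a trained image-recognition model: classifies by
# a made-up pixel-sum threshold.
toy_model = lambda pixels: "dog" if sum(pixels) > 10 else "cat"
samples = [([5, 6], "dog"), ([1, 2], "cat"), ([9, 9], "dog")]
accuracy = evaluate(toy_model, samples)
```

A model is judged usable when this score over the test data is acceptably high; the threshold itself is a design choice outside the scope of the patent.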
(4) Environmental destruction submodule 114:
After the training process is completed, the environment destruction sub-module 114 may destroy the created container environment. The first storage space 115, however, is not destroyed: the trained model remains stored there so that the inference sub-module 113 can perform the inference process according to the stored trained model.
(5) The metadata extraction module 120:
The metadata extraction module 120 may automatically extract metadata from the machine learning program code input by the source developer while the running module 110 performs the machine learning task, and the metadata may be used to reproduce the execution environment of the machine learning task.
The metadata extraction module 120 may also generate a description file from the extracted metadata and store it in the second storage space 121. When other developers want to reproduce the running environment of the machine learning task, they obtain the stored description file from the second storage space 121 and configure and debug the development environment directly according to the metadata the file contains, thereby reproducing the target training environment and accelerating propagation of the model.
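The description-file round trip just described can be sketched as follows. The JSON serialization, the file name, and the use of a temporary directory to stand in for the second storage space 121 are all assumptions for illustration, not details given by the patent.

```python
import json
import tempfile

def store_description_file(metadata, storage_dir):
    """Serialize extracted metadata into a description file placed in
    the given storage space, so other developers can later fetch it
    and reproduce the training environment."""
    path = f"{storage_dir}/description.json"
    with open(path, "w") as f:
        json.dump(metadata, f, indent=2)
    return path

def load_description_file(path):
    """Read a stored description file back into a metadata dictionary."""
    with open(path) as f:
        return json.load(f)

# Round-trip demo, using a temporary directory as the storage space.
meta = {"framework": {"name": "tensorflow", "version": "1.12"}}
with tempfile.TemporaryDirectory() as d:
    saved_path = store_description_file(meta, d)
    restored = load_description_file(saved_path)
```

Any serialization with a stable schema would do; JSON is used here only because it round-trips dictionaries with no extra dependencies.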
In the prior art, in order to reproduce the training environment of a specific machine learning task, the source developer usually provides metadata under one or more of the following three description standards: the deep learning framework (framework) selected by the source developer, the model (model) used by the source developer, and the data set (dataset) used by the source developer. This metadata is described in detail below with reference to tables 1 to 3.
Table 1 framework (framework)
Property   Type      Description
name       string    Name of the deep learning framework selected by the source developer
version    string    Version of the deep learning framework selected by the source developer
As shown in table 1, the deep learning framework may include, but is not limited to: TensorFlow, the Convolutional Neural Network Framework (CNNF), and the Convolutional Architecture for Fast Feature Embedding (Caffe).
It should be understood that, in addition to common network structures such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), TensorFlow can also support deep reinforcement learning and other computationally intensive scientific computation (such as solving partial differential equations).
Table 2 model (model)
Property   Type      Description
name       string    Name of the model used by the source developer
version    string    Version of the model used by the source developer
source     string    Source of the model used by the source developer
file       object    File name of the model used by the source developer
creator    string    Author of the model used by the source developer
time       ISO-8601  Creation time of the model used by the source developer
As shown in Table 2, the model used by the source developer may include, but is not limited to: image recognition models, character recognition models, and the like.
It should be noted that the model used by the source developer may be a public model or a private model. If the source developer uses a public model, a Uniform Resource Locator (URL) link to the public model is provided.
It should also be noted that the model file itself is not stored directly in the metadata description file; instead, the model file can be packaged together with the metadata description file and referenced in the description by its file name. If the model used by the source developer is a public model, the description file records a URL link instead.
Table 3 data set (dataset)
Property   Type      Description
name       string    Name of the data set used by the source developer
version    string    Version of the data set used by the source developer
source     string    Source of the data set used by the source developer
As shown in Table 3, URL links of the data set used by the source developer or a compressed file of the data set itself may be packaged with the metadata description file.
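Putting tables 1-3 together, a description-file fragment for one machine learning task might look like the following. Every concrete value here (framework and model names, versions, the example.com URLs, the timestamp) is invented purely for illustration; only the field names come from the tables above.

```python
# Hypothetical instance of the description standards of tables 1-3.
metadata_record = {
    "framework": {"name": "tensorflow", "version": "1.12.0"},
    "model": {
        "name": "resnet50",
        "version": "1.0",
        "source": "https://example.com/models/resnet50",  # public model: a URL link
        "file": "resnet50.pb",
        "creator": "source-developer",
        "time": "2019-03-19T00:00:00Z",  # ISO-8601 creation time
    },
    "dataset": {
        "name": "cifar10",
        "version": "1.0",
        "source": "https://example.com/datasets/cifar10",
    },
}
```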
Referring to tables 1-3, metadata such as the framework, model, and data set is typically determined when the source developer packages the container image of the training component in the environment construction sub-module 111. Take as an example a source developer who writes and starts a YAML (YAML Ain't Markup Language) file that packages the container image of the training component: the input program code then includes critical metadata such as the framework (framework), model (model), and data set (dataset) selected and used by the source developer.
However, it is difficult to reproduce the training environment of a machine learning task relying only on the metadata of tables 1-3. The embodiment of the present application therefore further provides metadata under one or more of the following three description standards: the processing manner of the data set used by the source developer (data-process), the structure of the model used by the source developer (model-architecture), and the training parameters used by the source developer in the training process (training-params). This metadata is described in detail below with reference to tables 4 to 6.
Table 4 data set processing (data-process)
[The body of table 4, which describes how the input data set is divided into training data and test data, appears only as an image in the original publication.]
Referring to table 4, the data set defined by the source developer may be divided according to how the input data set is processed: one part of the input data set is used in the model training process, that is, serves as training data; another part is used in the model inference process, that is, serves as test data.
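A minimal sketch of the division just described: one slice of the input data set becomes training data and the remainder becomes test data. The 80/20 ratio is an assumption for illustration, not something the patent specifies.

```python
def split_dataset(samples, train_ratio=0.8):
    """Divide the input data set: the first portion feeds the model
    training process, the remainder serves as test data for the model
    inference process."""
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

train_data, test_data = split_dataset(list(range(10)), train_ratio=0.8)
```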
Table 5 structure of the model (model-architecture)
[The body of table 5 appears only as an image in the original publication.]
TABLE 6 training parameters (training-params)
[The body of table 6, which lists the training parameters, appears only as an image in the original publication.]
Referring to tables 4-6, metadata such as the data set processing manner, the model structure, and the training parameters is typically hidden in the training program code stored by the source developer in the first storage space 115.
In the embodiment of the application, the metadata shown in tables 1 to 6 is obtained automatically while the running module 110 performs the machine learning task. The development environment can then be configured and debugged directly according to this metadata, thereby reproducing the training environment of the target machine learning task.
According to the description standards shown in tables 1 to 6, six items of metadata need to be extracted in total: the deep learning framework (framework) selected by the source developer, the model (model), the data set (dataset), the processing manner of the data set (data-process), the structure of the model (model-architecture), and the training parameters used in the training process (training-params). Because different metadata are determined in different ways, the specific implementations for extracting these six items also differ.
Take the extraction of the framework, model, and data set metadata shown in tables 1-3 as an example. Because this metadata is typically determined by the source developer when packaging the container image of the training component, it is stored on the physical host that starts the training container. Accordingly, the metadata extraction module 120 may obtain the metadata shown in tables 1-3 by sending a query command to the physical host.
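Once the start script text is available (for example, returned by such a query), the keyword-search extraction of the first type of metadata could be sketched as below. The key names searched for and the "key: value" script layout are assumptions for illustration; a real start script may use different keys.

```python
import re

# Keywords for the first type of metadata (tables 1-3); assumed names.
FIRST_TYPE_KEYS = ("framework", "model", "dataset")

def extract_first_type(start_script):
    """Scan a training-container start script line by line and keep the
    'key: value' pairs whose key matches a known metadata keyword."""
    found = {}
    for line in start_script.splitlines():
        m = re.match(r"\s*(\w+)\s*:\s*(\S+)", line)
        if m and m.group(1) in FIRST_TYPE_KEYS:
            found[m.group(1)] = m.group(2)
    return found

script = """
image: training-component:v1
framework: tensorflow-1.12
model: resnet50
dataset: cifar10
"""
extracted = extract_first_type(script)
```

Lines whose key is not in the keyword list (such as `image:` above) are simply ignored, which is the essence of extraction "by way of keyword search according to the type of the metadata".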
Take the extraction of the data set processing manner, model structure, and training parameter metadata shown in tables 4-6 as an example. This metadata is determined after the training container is created and started, and the source developer includes it in the training program code stored in the storage space mounted on the training container. The metadata extraction module 120 may therefore obtain the metadata shown in tables 4-6 by accessing that training program code.
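A corresponding sketch for the second type of metadata scans the training program code for assignments to known hyper-parameter names. The variable names searched for are assumptions about how the training code happens to be written, not names fixed by the patent.

```python
import re

# Hyper-parameter names to search for (in the spirit of table 6);
# these identifiers are assumptions, not mandated by the patent.
PARAM_KEYS = ("learning_rate", "batch_size", "epochs")

def extract_training_params(training_code):
    """Search training program code for assignments to known
    hyper-parameter names, e.g. 'batch_size = 32'."""
    params = {}
    for key in PARAM_KEYS:
        m = re.search(rf"{key}\s*=\s*([0-9.eE+-]+)", training_code)
        if m:
            params[key] = m.group(1)
    return params

training_code = "learning_rate = 0.01\nbatch_size = 32\nepochs = 100\n"
params = extract_training_params(training_code)
```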
In this embodiment, the metadata extraction module 120 may extract the above types of metadata by way of keyword search. The complete flow of the machine learning task provided by the embodiment of the present application is described in detail below with reference to fig. 2 and 3.
Referring to fig. 2, the complete flow of the machine learning task may include an environment building process, a training process, and an inference process, each of which is described in detail below.
(1) Environment building process:
Step 210: the source developer packages the container image of the training component and the container image of the metadata extraction module.
When packaging the training component image, the source developer determines the metadata such as the framework, model, and dataset shown in Tables 1-3.
In the case where the resource scheduling platform is Kubernetes, the training component may be a Jupyter Notebook. Jupyter Notebook is an interactive web application with which the source developer can enter and adjust the training program code of the model online.
Step 215: the source developer starts the container images.
The source developer may store the training component image packaged in step 210 and the image of the metadata extraction module 120 in a container repository. It should be understood that the container repository can manage, store, and protect container images. For example, the container repository may be a container registry.
The source developer may enter a container start script or command line to pull container images of different versions from the container repository into the container environment and start the corresponding components in containers. For example, the training component runs in a training container, and the metadata extraction module 120 runs in an extraction container.
It should be understood that container images of different versions may correspond to different metadata such as frameworks, models, and datasets.
It should also be understood that the container start script or command line may include information such as the name and version of the pulled container image and the time at which the container image was started.
Specifically, referring to the container environment 300 in fig. 3, the container group 310 providing the training function may include a training container and an extraction container; the training container mounts the first storage space 115, and the extraction container mounts the second storage space 121.
In the case where the resource scheduling platform is Kubernetes, the container group may be referred to as a pod. A pod is the smallest scheduling unit in Kubernetes and may include a plurality of containers. A pod runs on some physical host, and when scheduling is needed, Kubernetes schedules the pod as a whole.
The storage space mounted by a container may be a Persistent Volume (PV) in Kubernetes, a segment of network storage allocated by a network administrator. A PV has a life cycle independent of any single pod; that is, after the life cycle of a pod ends, the containers within the pod are destroyed, but the PVs mounted by those containers are not.
(2) Training process:
step 220: the source developer inputs the training program code.
The source developer may enter the training program code via the training component (e.g., Jupyter Notebook) running in the training container, in accordance with the metadata description standards shown in Tables 1-6. The training program code includes metadata such as the dataset processing method, model structure, and training parameters shown in Tables 4-6.
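For illustration only (the actual description standards of Tables 4-6 are not reproduced in this text), training program code might embed such metadata as keyword-tagged comments that a later keyword search can recover. Every name and value below is a hypothetical example:

```python
# Hypothetical training program code with metadata embedded as
# keyword-tagged comments, per an assumed description standard.
TRAINING_CODE = """\
# data-process: normalize,shuffle
# model-architecture: resnet50
# training-params: lr=0.01,epochs=10,batch_size=32
def train(dataset):
    ...
"""

def parse_training_params(code: str) -> dict:
    """Recover the training-params line and split it into name/value pairs."""
    for line in code.splitlines():
        if "training-params:" in line:
            pairs = line.split("training-params:", 1)[1].strip()
            return dict(p.split("=", 1) for p in pairs.split(","))
    return {}

print(parse_training_params(TRAINING_CODE))
```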
The entered training program code may be stored in the first storage space 115 mounted by the training container.
It should be noted that if the entered training program code needs to be modified during model training, it may be entered and adjusted online through Jupyter Notebook.
After the model training process is finished, the trained model is also stored in the first storage space mounted by the training container.
Step 225: the metadata extraction module 120 extracts metadata and stores it in the second storage space 121 mounted by the extraction container.
The metadata extraction module 120 running in the extraction container extracts the metadata by keyword search, according to the metadata description standards shown in Tables 1-6.
Since different items of metadata are determined in different ways, the specific implementations for extracting the 6 items also differ. One possible implementation is that the metadata extraction module 120 extracts the metadata shown in Tables 1-3 (framework, model, dataset, etc.) from the container start script or command line input by the source developer. Another possible implementation is that the metadata extraction module 120 extracts the metadata shown in Tables 4-6 (dataset processing method, model structure, training parameters, etc.) from the training program code stored by the source developer in the first storage space 115. For details, refer to the description of fig. 4; this is not repeated here.
It should be noted that when the model training task finishes, the pod providing the training function is destroyed, but the mounted first storage space 115 and second storage space 121 are not.
(3) Inference process:
Step 230: the container images of the inference component and the metadata extraction module 120 are started.
The process of creating and starting the container image of the inference component and the container image of the metadata extraction module 120 corresponds to step 215; for details, refer to the description in step 215, which is not repeated here.
Step 235: the inference container performs inference services according to the trained model.
Specifically, referring to the container environment 300 in fig. 3, the container group 320 providing the inference function may include an inference container and an extraction container. The first storage space 115 previously mounted by the training container may be re-mounted to the inference container, and the second storage space 121 previously mounted by the extraction container in the container group providing the training function may be re-mounted to the extraction container in the container group providing the inference function.
The inference container may perform inference according to the trained model stored in the mounted first storage space 115, and the extraction container in the container group providing the inference function may further acquire metadata generated during inference and store it in the mounted second storage space 121.
The process of extracting metadata by the metadata extraction module 120 operating in the extraction container is described in detail below with reference to the example in fig. 4.
Fig. 4 is a schematic flow chart of a method for extracting metadata by the metadata extraction module 120 according to an embodiment of the present application. The method shown in fig. 4 may include steps 410 and 420, which are described in detail below.
It should be understood that the metadata extraction module 120 shown in fig. 1 may be divided into two parts, a first metadata extraction module and a second metadata extraction module, according to the type of metadata extracted.
The first metadata extraction module may be configured to extract, from the physical host, the metadata shown in Tables 1-3 (framework, model, dataset, etc.) that the source developer determined when packaging the container image of the training component. The second metadata extraction module may be configured to extract the metadata shown in Tables 4-6 (dataset processing method, model structure, training parameters, etc.) from the training program code stored by the source developer in the storage space mounted to the training container.
Optionally, in some embodiments, the resource scheduling platform is Kubernetes; the first metadata extraction module may be a job extractor implemented as a kubectl command line, and the second metadata extraction module may be a code extractor. For convenience of description, Kubernetes is taken as the example resource scheduling platform below.
Step 410: the first metadata extraction module sends a query command to the physical host to extract metadata such as the framework, model, and dataset shown in Tables 1-3.
The metadata shown in Tables 1-3 (framework, model, dataset, etc.) has been determined by the source developer through packaging the container image of the training component, and the container image has been stored in the container repository. The source developer may input a container start script or command line to pull container images of different versions from the container repository, where different versions correspond to different metadata such as frameworks, models, and datasets.
Since metadata such as the framework, model, and dataset is stored on the physical host, the job extractor needs to access an external service to acquire it. In this embodiment of the application, a gateway (e.g., an egress) may be configured so that the first metadata extraction module (e.g., the job extractor) can access the Internet Protocol (IP) address of the physical host through the egress and obtain the metadata such as the framework, model, and dataset by sending a query command line.
In the case where the resource scheduling platform is Kubernetes, a kubectl command line may be sent to dynamically extract, by keyword search, relevant metadata such as the name and version of the container image, the time the container image was started, and the framework, model, and dataset from the container start script and command line on the physical host. The extracted metadata is stored in the mounted second storage space 121 in JavaScript Object Notation (JSON) or another file format.
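As a hedged sketch of what the job extractor might do (the exact kubectl subcommand, its output format, and the keyword names are assumptions, not taken from the patent), it could invoke kubectl, keyword-search the returned text, and serialize the result as JSON:

```python
import json
import re
import subprocess

# Keywords covering image bookkeeping plus the Tables 1-3 items (assumed names).
JOB_KEYWORDS = ["image", "framework", "model", "dataset"]

def parse_start_script(text: str) -> dict:
    """Keyword-search container start script / kubectl output text."""
    found = {}
    for key in JOB_KEYWORDS:
        m = re.search(rf"{key}\s*[:=]\s*(\S+)", text)
        if m:
            found[key] = m.group(1)
    return found

def job_extractor(pod_name: str) -> str:
    """Describe the pod via kubectl (assumed on PATH) and return JSON metadata."""
    out = subprocess.run(["kubectl", "describe", "pod", pod_name],
                         capture_output=True, text=True).stdout
    return json.dumps(parse_start_script(out))

sample = "image: registry/train:v2\nframework: pytorch\ndataset: cifar10"
print(json.dumps(parse_start_script(sample)))
```

In the flow above, the resulting JSON string would then be written into the second storage space 121 mounted by the extraction container.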
Step 420: the second metadata extraction module extracts metadata such as the dataset processing method, model structure, and training parameters shown in Tables 4-6 from the first storage space 115 mounted by the training container.
The metadata shown in Tables 4-6 (dataset processing method, model structure, training parameters, etc.) has been stored by the source developer in the first storage space 115 mounted by the training container. The second metadata extraction module (e.g., the code extractor) can therefore extract this metadata from the training program code stored in the first storage space 115 by keyword search, in accordance with the metadata description standards shown in Tables 4-6, and store it in the mounted second storage space 121 in JSON or another file format.
After extracting their respective metadata, the code extractor and the job extractor may integrate the extracted metadata and store it in the second storage space 121 in the form of "metadata description file + model + dataset".
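The integration step can be sketched as a simple merge of the two extractors' results into one description file; serializing it as indented JSON, and the exact layout of "metadata description file + model + dataset", are assumptions for illustration:

```python
import json

def integrate(job_meta: dict, code_meta: dict) -> str:
    """Merge the job extractor's and the code extractor's metadata into a
    single description file, serialized here as indented JSON."""
    return json.dumps({**job_meta, **code_meta}, indent=2)

job_meta = {"framework": "tensorflow", "model": "resnet50", "dataset": "imagenet"}
code_meta = {"training-params": "lr=0.01,epochs=10"}
description_file = integrate(job_meta, code_meta)
print(description_file)
# In the flow above, this file would be stored in the second storage space 121
# together with the model and the dataset.
```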
In this embodiment of the application, through the workflow of the machine learning task, the metadata (shown in Tables 1-6) used by the source developer in the machine learning task can be acquired and stored automatically by the metadata extraction module during environment building and model training. After the machine learning task ends, if the source developer or another developer needs to reproduce the source development environment, the workflow of the entire life cycle of the machine learning task can be reconstructed from the stored metadata, thereby reproducing the source development environment.
Fig. 5 is a schematic block diagram of a system 500 for extracting metadata in a machine learning training process according to an embodiment of the present application, where the system 500 may include at least one server.
For convenience of description, server 510 and server 520 are illustrated in fig. 5 as an example. Server 510 and server 520 are similarly structured.
The running module 110 shown in fig. 1 may run on at least one server; for example, the running module 110 runs on server 510 and server 520 respectively.
The metadata extraction module 120 shown in fig. 1 may be deployed in various forms, and this is not particularly limited in this embodiment of the present application. As one example, the metadata extraction module 120 may be run on each of at least one server, e.g., server 510 and server 520 with the metadata extraction module 120 running thereon, respectively. As another example, the metadata extraction module 120 may also run on a portion of at least one server, e.g., the metadata extraction module 120 runs on the server 510 or on the server 520. As another example, the metadata extraction module 120 may also run on a server other than the at least one server described above, e.g., the metadata extraction module 120 runs on the server 530.
The system 500 may perform the above method for extracting metadata in the machine learning training process. Specifically, at least one server in the system 500 may include at least one processor and a memory. The memory is used to store program instructions, and the processor included in the at least one server may execute the program instructions stored in the memory to implement the method for extracting metadata in the machine learning training process, or to implement the running module 110 and the metadata extraction module 120 shown in fig. 1. Taking server 510 as an example, the specific process by which the server implements the above method is described in detail below.
The server 510 may include: at least one processor (e.g., processor 511, processor 516), memory 512, communication interface 513, input output interface 514.
Wherein the at least one processor may be coupled to the memory 512. The memory 512 may be used to store program instructions. The memory 512 may be a storage unit inside the at least one processor, may be an external storage unit independent of the at least one processor, or may be a component including a storage unit inside the at least one processor and an external storage unit independent of the at least one processor.
The memory 512 may be a Solid State Drive (SSD), a Hard Disk Drive (HDD), a read-only memory (ROM), a Random Access Memory (RAM), or the like.
Optionally, server 510 may also include a bus 515. The memory 512, the input/output interface 514, and the communication interface 513 may be connected to at least one processor via a bus 515. The bus 515 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 515 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in FIG. 5, but this does not represent only one bus or one type of bus.
Optionally, in some embodiments, system 500 may also include cloud storage 540. Cloud storage 540 may be coupled to system 500 as external storage. The program instructions may be stored in memory 512 or cloud storage 540.
In the embodiment of the present application, at least one processor may be a Central Processing Unit (CPU), and may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. Or one or more integrated circuits are used for executing the relevant programs, so as to implement the technical solutions provided by the embodiments of the present application.
Referring to fig. 5, in the server 510, taking the processor 511 as an example, the running module 110 runs in the processor 511. The running module 110 may include a plurality of sub-modules, for example, the environment construction sub-module 111, the training sub-module 112, the inference sub-module 113, and the environment destruction sub-module 114 shown in fig. 1.
The first storage space 115 of the memory 512 stores the training program code input by the source developer; the training program code includes one or more items of metadata such as the dataset processing methods, model structures, and training parameters described in Tables 4-6. The second storage space 121 stores the metadata extracted by the metadata extraction module 120. The third storage space 5121 stores a training container start script input by the source developer; the start script includes one or more items of metadata such as the frameworks, models, and datasets shown in Tables 1-3.
Processor 511 retrieves stored program instructions from memory 512 to perform the machine learning task described above. Specifically, the environment construction sub-module 111 in the running module 110 obtains the container start script from the third storage space 5121 of the memory 512 and executes the construction of the container environment. The training sub-module 112 in the running module 110 obtains the training program code from the first storage space 115 of the memory 512 to perform the model training process, and may store the training result of the model in the first storage space 115. For the specific implementation of each sub-module in the running module 110 executing the machine learning task, refer to the description of fig. 1, which is not repeated here.
During the running of the machine learning task, the metadata extraction module 120 may extract one or more items of metadata such as the dataset processing method, model structure, and training parameters described in Tables 4-6 from the training program code stored in the first storage space 115 of the memory 512, and may further extract one or more items of metadata such as the frameworks, models, and datasets shown in Tables 1-3 from the container start script stored in the third storage space 5121.
Optionally, in some embodiments, the metadata extraction module 120 may further generate a description file from the extracted metadata and store the generated description file in the second storage space 121 of the memory 512. For a specific process of extracting metadata by the metadata extraction module 120, reference is made to the above description, and details are not repeated here.
It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A method of extracting metadata in a machine learning task, the method being applied to a virtualized environment, the method comprising:
running a machine learning task in the virtualized environment according to a machine learning program code input by a user, wherein the virtualized environment runs the machine learning task through at least one training container;
extracting metadata from the machine learning program code, the metadata being used for reproducing the execution environment of the machine learning task, the metadata including a first type of metadata;
storing the metadata in a first storage space;
the extracting metadata from the machine learning program code comprises: extracting the metadata from the machine learning program code according to the type of the metadata in a keyword searching mode;
extracting the metadata from the machine learning program code according to the type of the metadata in a keyword search mode, wherein the extracting of the metadata comprises: and extracting the first type metadata from an input training container starting script according to the type of the first type metadata, wherein the training container starting script is used for starting the at least one training container.
2. The method of claim 1, wherein the type of the first type of metadata comprises any one or more of: a framework used by the machine learning task, a model used by the machine learning task, and a dataset used in a training process of the machine learning task.
3. The method of claim 1 or 2, wherein the virtualization environment runs the machine learning task through at least one training container, the metadata comprising a second type of metadata;
extracting the metadata from the machine learning program code according to the type of the metadata in a keyword search mode, wherein the extracting of the metadata comprises:
extracting the metadata from input training program codes according to the type of the second type of metadata, wherein the training program codes are stored in a second storage space mounted in the at least one training container, and the training program codes are used for running a model training process of the machine learning task in the at least one training container.
4. The method of claim 3, wherein the type of the second type of metadata comprises any one or more of: a processing mode of a data set used in a training process of the machine learning task, a structure of a model used in the training process of the machine learning task, and a training parameter used in the training process of the machine learning task.
5. An apparatus to extract metadata in a machine learning task, the apparatus running in a virtualized environment, the apparatus comprising:
the running module is used for running a machine learning task in the virtualization environment according to a machine learning program code input by a user, and the virtualization environment runs the machine learning task through at least one training container;
a metadata extraction module, configured to extract metadata from the machine learning program code, where the metadata is used for reproducing the operating environment of the machine learning task, and the metadata includes a first type of metadata;
the metadata extraction module is further used for storing the metadata in a first storage space;
the metadata extraction module is specifically configured to: extracting the metadata from the machine learning program code according to the type of the metadata in a keyword searching mode;
the metadata extraction module is specifically configured to: and extracting the first type metadata from an input training container starting script according to the type of the first type metadata, wherein the training container starting script is used for starting the at least one training container.
6. The apparatus of claim 5, wherein the type of the first type of metadata comprises any one or more of: a framework used by the machine learning task, a model used by the machine learning task, and a dataset used in a training process of the machine learning task.
7. The apparatus of claim 5 or 6, wherein the virtualization environment runs the machine learning task through at least one training container, and wherein the metadata comprises a second type of metadata;
the metadata extraction module is specifically configured to:
extracting the metadata from input training program codes according to the type of the second type of metadata, wherein the training program codes are stored in a second storage space mounted in the at least one training container, and the training program codes are used for running a model training process of the machine learning task in the at least one training container.
8. The apparatus of claim 7, wherein the type of the second type of metadata comprises any one or more of: a processing mode of a data set used in a training process of the machine learning task, a structure of a model used in the training process of the machine learning task, and a training parameter used in the training process of the machine learning task.
9. A system to extract metadata in a machine learning task, the system comprising at least one server, each server comprising a memory for program instructions and at least one processor to execute the program instructions in the memory to perform the method of any of claims 1 to 4.
10. A non-transitory readable storage medium including program instructions that, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 4.
11. A computer program product comprising program instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 4.
CN201910208590.XA 2019-03-19 2019-03-19 Method and device for extracting metadata of machine learning task Active CN110058922B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910208590.XA CN110058922B (en) 2019-03-19 2019-03-19 Method and device for extracting metadata of machine learning task
PCT/CN2020/070577 WO2020186899A1 (en) 2019-03-19 2020-01-07 Method and apparatus for extracting metadata in machine learning training process

Publications (2)

Publication Number Publication Date
CN110058922A CN110058922A (en) 2019-07-26
CN110058922B true CN110058922B (en) 2021-08-20



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899141A (en) * 2015-06-05 2015-09-09 北京航空航天大学 Test case selecting and expanding method facing network application system
CN107451663A (en) * 2017-07-06 2017-12-08 阿里巴巴集团控股有限公司 Algorithm component, modeling method and device based on algorithm components, and electronic equipment
CN108665072A (en) * 2018-05-23 2018-10-16 中国电力科学研究院有限公司 Full-process machine learning algorithm training method and system based on cloud architecture
CN108805282A (en) * 2018-04-28 2018-11-13 福建天晴在线互动科技有限公司 Blockchain-based deep learning data sharing method and storage medium
CN109146084A (en) * 2018-09-06 2019-01-04 郑州云海信息技术有限公司 Method and device for machine learning based on cloud computing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8296727B2 (en) * 2005-10-14 2012-10-23 Oracle Corporation Sub-task mechanism for development of task-based user interfaces
US10282171B2 (en) * 2015-03-30 2019-05-07 Hewlett Packard Enterprise Development Lp Application analyzer for cloud computing
CN109272116A (en) * 2018-09-05 2019-01-25 郑州云海信息技术有限公司 Method and device for deep learning
CN110058922B (en) * 2019-03-19 2021-08-20 Huawei Technologies Co., Ltd. Method and device for extracting metadata of machine learning task

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Demo3 - Saving a trained model; Xiao Jianjun; https://developer.aliyun.com/article/674117; 2018-07-05; pp. 1-2 *
Applications of machine learning in cyberspace security research; Zhang Lei et al.; Chinese Journal of Computers; 2018-03-05; pp. 1-35 *

Also Published As

Publication number Publication date
WO2020186899A1 (en) 2020-09-24
CN110058922A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
CN110058922B (en) Method and device for extracting metadata of machine learning task
US11126448B1 (en) Systems and methods for using dynamic templates to create application containers
US11113475B2 (en) Chatbot generator platform
US20230057335A1 (en) Deployment of self-contained decision logic
US11172022B2 (en) Migrating cloud resources
US10148757B2 (en) Migrating cloud resources
US11922195B2 (en) Embeddable notebook access support
CN106951451A (en) Webpage content extraction method, device and computing device
US11816456B2 (en) Notebook for navigating code using machine learning and flow analysis
JP6903755B2 (en) Data integration job conversion
US20220036175A1 (en) Machine learning-based issue classification utilizing combined representations of semantic and state transition graphs
CN112328301B (en) Method and device for maintaining consistency of operating environments, storage medium and electronic equipment
US20210158131A1 (en) Hierarchical partitioning of operators
US11853749B2 (en) Managing container images in groups
CN111782181A (en) Code generation method and device, electronic equipment and storage medium
WO2020038376A1 (en) Method and system for uniformly performing feature extraction
US11604662B2 (en) System and method for accelerating modernization of user interfaces in a computing environment
US9152458B1 (en) Mirrored stateful workers
CN113835835B (en) Method, device and computer readable storage medium for creating consistency group
KR102132450B1 (en) Method and apparatus for testing JavaScript interpretation engine using machine learning
US11921608B2 (en) Identifying a process and generating a process diagram
US20230118939A1 (en) Risk Assessment of a Container Build
US20220121714A1 (en) Endpoint identifier for application programming interfaces and web services
WO2024031983A1 (en) Code management method and related device
US20220067502A1 (en) Creating deep learning models from kubernetes api objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211221

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Patentee after: Super fusion Digital Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.