CN112800018A - Development system - Google Patents

Development system

Info

Publication number
CN112800018A
Authority
CN
China
Prior art keywords
distributed
container
proxy server
file system
training data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110018599.1A
Other languages
Chinese (zh)
Other versions
CN112800018B (en)
Inventor
Li Dahu (李大虎)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Electronic System Technology Co ltd
Zhongdian Cloud Computing Technology Co ltd
Original Assignee
China Electronic System Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Electronic System Technology Co ltd
Priority to CN202110018599.1A
Publication of CN112800018A
Application granted
Publication of CN112800018B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10: File systems; File servers
    • G06F16/18: File system types
    • G06F16/182: Distributed file systems
    • G06F16/1824: Distributed file systems implemented using Network-attached Storage [NAS] architecture
    • G06F16/183: Provision of network file services by network file servers, e.g. by using NFS, CIFS
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00: Arrangements for software engineering
    • G06F8/20: Software design

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application provides a development system comprising a business system, a distributed storage proxy server, and a distributed file system. Because the distributed storage proxy server is configured with multiple types of transmission protocols, it can complete the storage and reading of training data on top of the distributed file system. The multiple transmission protocols therefore share a single set of underlying storage (namely the distributed file system), so no copying of training data is required while the business system manages training data and manages experiment containers, which saves the time otherwise spent moving data before training a model in an experiment container. In addition, a user can directly use the business system to create and manage experiment containers, so the whole modeling workflow (for example writing code, training models, and writing deployment scripts) can be carried out inside an experiment container, improving the efficiency of modeling and deployment.

Description

Development system
Technical Field
The application relates to the field of artificial intelligence, in particular to a development system.
Background
With the development of artificial intelligence, AI is applied in more and more scenarios, and the demand for modeling keeps growing. AI engineers are the main force behind modeling, but their productivity is currently limited by outdated modeling tools and complicated model deployment. A system platform that can improve the modeling and deployment efficiency of AI engineers is therefore needed.
Disclosure of Invention
The application provides a development system so that no copying of training data is required while the business system manages training data and manages experiment containers, which saves the time otherwise spent moving data before training a model in an experiment container. Moreover, a user can directly use the business system to create and manage experiment containers, so the whole modeling workflow (for example writing code, training models, and writing deployment scripts) can be carried out inside an experiment container, improving the efficiency of modeling and deployment.
The present application provides a development system, the system comprising: the system comprises a service system, a distributed storage proxy server and a distributed file system; the service system is connected with the distributed storage proxy server, and the distributed proxy server is connected with the distributed file system; wherein the distributed proxy server is configured with multiple types of transmission protocols;
the business system is used for managing training data, and creating and managing an experiment container;
the distributed proxy server is used for storing the training data in the service system into the distributed file system by utilizing the multiple types of transmission protocols, and mounting the data stored in the distributed file system into an experimental container in the service system;
the distributed file system is used for storing the data stored by the distributed proxy server.
Optionally, the service system includes a data management module and a container orchestration engine;
the data management module is used for managing training data;
and the container arranging engine is used for creating and managing the experiment containers.
Optionally, the container orchestration engine is Kubernetes.
Optionally, the distributed proxy server is configured with two transmission protocols, which are a storage service protocol and a file sharing protocol respectively;
the distributed proxy server is specifically configured to store the training data in the data management module into the distributed file system through the storage service protocol; and mounting the data stored in the distributed file system to an experimental container in the service system through the file sharing protocol.
Optionally, the experimental container is specifically configured to read training data stored in the distributed file system through the file sharing protocol, and perform model training by using the training data to obtain a trained model.
Optionally, the experiment container is specifically configured to upload data to the distributed file system through the file sharing protocol.
Optionally, the storage service protocol includes a simple storage service protocol.
Optionally, the file sharing protocol includes a network file system protocol.
Optionally, the data management module is specifically configured to add, delete, and query training data.
Optionally, the container orchestration engine is configured with a code development tool; the container orchestration engine is specifically configured to create an experiment container, and to use the code development tool to enter code and deployment scripts and to create, import, and train a model in the experiment container.
It can be seen from the above technical solutions that the present application provides a development system comprising a business system, a distributed storage proxy server, and a distributed file system; the business system is connected to the distributed storage proxy server, and the distributed proxy server is connected to the distributed file system; the distributed proxy server is configured with multiple types of transmission protocols; the business system is used for managing training data and for creating and managing experiment containers; the distributed proxy server is used for storing the training data in the business system into the distributed file system through the multiple types of transmission protocols and for mounting the data stored in the distributed file system into an experiment container in the business system; and the distributed file system is used for storing the data written by the distributed proxy server. Because the distributed proxy server is configured with multiple types of transmission protocols, the distributed storage proxy server can complete the storage and reading of training data on top of the distributed file system, and the multiple transmission protocols share a single set of underlying storage (namely the distributed file system). No copying of training data is therefore required while the business system manages training data and manages experiment containers, which saves the time otherwise spent moving data before training a model in an experiment container. In addition, a user can directly use the business system to create and manage experiment containers, so the whole modeling workflow (for example writing code, training models, and writing deployment scripts) can be carried out inside an experiment container, improving the efficiency of modeling and deployment.
Further effects of the above preferred implementations will be described below in conjunction with specific embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present application or the solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic structural diagram of a development system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of modeling with an experiment container according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following embodiments and accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventors found that, in many current vendor platforms, object storage and container NFS storage are separate: data management uploads data to the object storage, but model training reads data from the container's NFS storage, so the data must be moved from object storage to NFS storage, which consumes a lot of network bandwidth and time. The present application therefore provides a development system in which the distributed proxy server is configured with multiple types of transmission protocols, so the distributed storage proxy server can complete the storage and reading of training data on top of the distributed file system. The multiple transmission protocols thus share a single set of underlying storage (namely the distributed file system), so no copying of training data is required while the business system manages training data and manages experiment containers, saving the time otherwise spent moving data before training a model in an experiment container. In addition, a user can directly use the business system to create and manage experiment containers, so the whole modeling workflow (for example writing code, training models, and writing deployment scripts) can be carried out inside an experiment container, improving the efficiency of modeling and deployment.
Various non-limiting embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, an embodiment of the present application provides a development system, which may include: the system comprises a service system, a distributed storage proxy server and a distributed file system. The service system is connected with the distributed storage proxy server, and the distributed proxy server is connected with the distributed file system.
In this embodiment, the distributed file system may be understood as a system that can store files, and in particular, the distributed file system may be used to store data stored by the distributed proxy server. In one implementation, the distributed file system may be Ceph, which is a unified distributed file system designed for excellent performance, reliability, and scalability.
The business system can be used for managing training data, and creating and managing experiment containers. It should be noted that, in one implementation, the business system may include a data management module and a container orchestration engine.
The data management module may be understood as a module that stores the training data used for training models and manages that training data; that is, the data management module may be configured to manage the training data. The training data used for training a model may be pictures, text, or audio together with the corresponding annotation data, for example a set of pictures and their annotation files. In one implementation, the data management module may be specifically configured to add, delete, and query training data, that is, it may add, delete, or query the training data it stores. In the data management module, each piece of training data has a corresponding Uniform Resource Identifier (URI), so training data can be queried by its URI.
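As a minimal sketch of how such a data management module might expose add, delete, and query operations keyed by URI, the following Python snippet assumes a simple in-memory index; the class and method names are hypothetical and not taken from the patent.

import uuid

class DataManagementModule:
    """Minimal sketch: tracks training data (e.g. pictures plus annotation files) by URI."""

    def __init__(self):
        # Maps URI -> metadata describing one piece of training data.
        self._records = {}

    def add(self, name, storage_path):
        """Register a piece of training data and return the URI used to query it."""
        uri = f"dm://training-data/{uuid.uuid4()}"
        self._records[uri] = {"name": name, "path": storage_path}
        return uri

    def delete(self, uri):
        """Remove a piece of training data by its URI."""
        self._records.pop(uri, None)

    def query(self, uri):
        """Look up the metadata of a piece of training data by its URI."""
        return self._records.get(uri)

# Usage sketch: register a batch of mask pictures and query it back by URI.
dm = DataManagementModule()
uri = dm.add("mask-pictures", "s3://training-data/masks/batch1")
print(dm.query(uri))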
The container orchestration engine may be understood as the module that creates and manages experiment containers; in the present system all containers, including the experiment containers and the deployment containers mentioned later, are managed by the container orchestration engine. In one implementation, the container orchestration engine is configured with code development tools and may be configured with basic development dependency packages, such as the code development tools Jupyter Notebook or VS Code, the deep learning package TensorFlow, a Jupyter Notebook image, a VS Code image, and the IDEA and Eclipse development tools; in this way, the container orchestration engine may be specifically used to create and manage experiment containers. It should be noted that, since an experiment container can be understood as a resource-isolated machine, equivalent to a development environment, a code development tool may be configured inside it so that the user can write code in the container; managing an experiment container by the container orchestration engine may be understood as using the code development tool to enter code and deployment scripts and to create, import, and train models inside the experiment container.
It should be noted that in the prior art many vendors do not write code inside a container; only training is containerized. Although the deployment stage is containerized, it usually requires the AI engineer to write a Dockerfile to create a container, and AI engineers are often not familiar with Docker, so they cannot create a deployment container quickly and conveniently. In contrast, the present embodiment containerizes the whole modeling workflow: code writing, training, and deployment all run in containers, which improves the modeling efficiency of AI engineers. A user can therefore directly use the business system to create and manage experiment containers, and the whole modeling workflow (for example writing code, training models, and writing deployment scripts) can be carried out inside an experiment container, improving the efficiency of modeling and deployment.
For example, the container orchestration engine may be Kubernetes (abbreviated K8s, where the 8 replaces the eight characters "ubernete"). Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform; it aims to make deploying containerized applications simple and efficient and provides mechanisms for deploying, planning, updating, and maintaining applications. Taking the development of a mask recognition model as an example, as shown in fig. 2, the user selects a suitable underlying deep learning framework image and a Jupyter Notebook image or VS Code image, together with the required storage, CPU, and memory, and creates an experiment container Y1 based on K8s (i.e. the container orchestration engine); K8s returns a Jupyter Notebook entry address and an access token. Here the storage is the Ceph storage mounted through the NFS protocol. Using the entry address and the token, the user enters the Jupyter Notebook or VS Code interface inside experiment container Y1 and writes the mask recognition deep learning code directly in either IDE. A training script can then be written and mounted into experiment container Y1; next, experiment container Y1 is packaged into algorithm image A1. Resources such as GPU, CPU, and memory are then selected, together with automatic or manual hyperparameter tuning; based on algorithm image A1, one or more training containers are created with K8s and training is carried out, producing one or more models. Finally, a model M1 with suitable accuracy is selected from the trained models according to the mask recognition accuracy; based on a suitable underlying deep learning framework image and a Jupyter Notebook or VS Code image with default resources, a deployment container Y2 is created with K8s, a deployment script is written in deployment container Y2, and deployment container Y2 is packaged into deployment image A2, forming a standardized deployment image component that can be used flexibly on third-party platforms. Deployment image A2 can then be launched with K8s, forming a container Y3 that provides the mask recognition service: a picture X1 of a pedestrian wearing a mask can be uploaded and a request sent to the Y3 interface, which returns a new picture X2 indicating the mask position, together with the mask coordinates and a flag indicating whether a mask is worn.
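As a hedged sketch of how an experiment container such as Y1 might be created through Kubernetes, the official Kubernetes Python client could be used roughly as follows; the namespace, image name, NFS server address, mount path, and resource sizes are assumptions for illustration and are not specified in the patent.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

# NFS-backed volume, assumed to point at the distributed storage proxy's NFS endpoint.
nfs_volume = client.V1Volume(
    name="training-data",
    nfs=client.V1NFSVolumeSource(server="nfs-proxy.example.internal",
                                 path="/training-data"),
)

experiment = client.V1Pod(
    metadata=client.V1ObjectMeta(name="experiment-y1", labels={"app": "experiment"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="jupyter",
                image="jupyter/tensorflow-notebook:latest",  # assumed development image
                ports=[client.V1ContainerPort(container_port=8888)],
                volume_mounts=[client.V1VolumeMount(name="training-data",
                                                    mount_path="/home/jovyan/data")],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "2", "memory": "4Gi"}),
            )
        ],
        volumes=[nfs_volume],
    ),
)

# Creating the pod corresponds to creating experiment container Y1; the notebook
# entry address and access token would then be retrieved separately.
client.CoreV1Api().create_namespaced_pod(namespace="ai-dev", body=experiment)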
In this embodiment, the distributed proxy server may be configured with multiple types of transmission protocols, for example, in an implementation manner, the distributed proxy server may be configured with two types of transmission protocols, which are a Storage Service protocol and a file sharing protocol, respectively, for example, the Storage Service protocol may include a Simple Storage Service (S3) protocol; the file sharing protocol includes a Network File System (NFS) protocol.
It should be noted that the S3 protocol can be thought of as a global storage area network (SAN): an extremely large hard disk in which digital assets can be stored and retrieved. The assets stored and retrieved through the S3 protocol are called objects, and objects are kept in buckets; objects correspond to files and buckets to folders (or directories). As with a hard disk, objects and buckets can also be located through Uniform Resource Identifiers (URIs). In other words, files can be stored in and retrieved from the distributed file system through the S3 protocol, and the S3 protocol can be understood as a set of interface specifications for operating on files; for example, the annotation data required for model training can be uploaded to the distributed file system through the S3 upload interface.
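For illustration, training data could be uploaded through an S3-compatible interface with boto3 as sketched below; the endpoint URL, credentials, bucket name, and object keys are placeholders, the patent does not prescribe any particular client library, and for Ceph such an interface is typically provided by the RADOS Gateway.

import boto3

# S3-compatible endpoint assumed to be exposed by the distributed storage proxy.
s3 = boto3.client(
    "s3",
    endpoint_url="http://storage-proxy.example.internal:7480",
    aws_access_key_id="DEMO_ACCESS_KEY",
    aws_secret_access_key="DEMO_SECRET_KEY",
)

bucket = "training-data"
s3.create_bucket(Bucket=bucket)

# Upload a picture and its annotation file as two objects.
s3.upload_file("masks/person_0001.jpg", bucket, "masks/person_0001.jpg")
s3.upload_file("masks/person_0001.json", bucket, "masks/person_0001.json")

# List what was stored; the same data later appears under the NFS mount shared
# with the experiment container, so no extra copy between storage systems is needed.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])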
NFS is short for Network File System, one of the file systems supported by FreeBSD among other systems. The NFS protocol allows a system to share directories and files with other devices on a network, so users and programs can access files on remote systems in the same way as local files. It should be noted that, since a container in the business system has no persistent storage of its own, container storage must be provided by mounting; in one implementation of this embodiment, an NFS share may be mounted as the container's storage.
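A hedged sketch of providing the NFS share as container storage in Kubernetes terms, as an alternative to the inline pod volume shown earlier, could declare a PersistentVolume and a PersistentVolumeClaim; the server address, export path, namespace, and capacity below are placeholders, not values from the patent.

from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

# Cluster-level PersistentVolume backed by the proxy server's NFS export.
pv_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "training-data-pv"},
    "spec": {
        "capacity": {"storage": "100Gi"},
        "accessModes": ["ReadWriteMany"],
        "nfs": {"server": "nfs-proxy.example.internal", "path": "/training-data"},
    },
}
api.create_persistent_volume(body=pv_manifest)

# Claim that an experiment container can reference in place of an inline NFS volume.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "training-data-pvc"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "",
        "volumeName": "training-data-pv",
        "resources": {"requests": {"storage": "100Gi"}},
    },
}
api.create_namespaced_persistent_volume_claim(namespace="ai-dev", body=pvc_manifest)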
Specifically, in this embodiment, the distributed proxy server may be configured to store training data in the business system into the distributed file system by using the multiple types of transmission protocols, and mount data stored in the distributed file system into an experimental container in the business system.
Specifically, the distributed proxy server may store the training data in the data management module into the distributed file system through the storage service protocol. For example, taking mask recognition model development as an example, the data management module may upload pictures of people wearing masks and the corresponding annotation data to the distributed file system (such as the object store Ceph) through the S3 protocol.
The distributed proxy server can mount the data stored in the distributed file system into an experiment container in the business system through the file sharing protocol. In this way, the experiment container can read the data stored in the distributed file system through the file sharing protocol and operate on it; for example, the experiment container can read the training data stored in the distributed file system through the file sharing protocol and perform model training with that data to obtain a trained model. Taking mask recognition model development as an example, the pictures of people wearing masks and the annotation data stored in the distributed file system (e.g. the object store Ceph) can be mapped directly into the container through the NFS protocol, so the uploaded pictures and annotations can be operated on directly in the container, for example to train a model. In addition, the experiment container may upload data to the distributed file system through the file sharing protocol; for example, after the experiment container finishes training a model, the model data can be uploaded to the distributed file system through the file sharing protocol.
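Inside the experiment container, reading the mounted training data and writing the trained model back to the shared storage might look roughly like the sketch below; the mount paths, file layout, and the tiny placeholder network are assumptions for illustration and are not the patent's actual mask recognition model.

import json
import pathlib

import tensorflow as tf

# Paths under the NFS mount provided by the distributed storage proxy (assumed layout).
DATA_DIR = pathlib.Path("/home/jovyan/data/masks")
MODEL_DIR = pathlib.Path("/home/jovyan/data/models")
MODEL_DIR.mkdir(parents=True, exist_ok=True)

# Read pictures and their annotation files straight from the shared storage;
# no copy from object storage into the container is needed.
images = sorted(DATA_DIR.glob("*.jpg"))
labels = [float(json.loads(p.with_suffix(".json").read_text())["mask"]) for p in images]

def load(path, label):
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    img = tf.image.resize(img, (128, 128)) / 255.0
    return img, label

ds = (tf.data.Dataset.from_tensor_slices(([str(p) for p in images], labels))
      .map(load).batch(16))

# Placeholder binary classifier standing in for the real mask recognition model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(ds, epochs=3)

# Writing the trained model back to the mount makes it visible through the same
# shared underlying storage used by the rest of the system.
model.save(str(MODEL_DIR / "mask_model.keras"))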
It can be seen that, in this embodiment, because an intermediate service layer (i.e. the distributed proxy server) is encapsulated on top of the distributed file system, and the distributed proxy server supports operating on the distributed file system through both the simple storage service protocol (e.g. the S3 protocol) and the network file system protocol (e.g. the NFS protocol), the object storage and the container NFS storage (i.e. the data management module and the experiment container) can share a single set of underlying storage (i.e. the distributed file system). A file can be written into the distributed file system through the S3 interface of the distributed proxy server and then mounted into the container through its NFS interface, so the experiment container can read the file directly. When the experiment container needs to train a model with the training data, there is therefore no need to move the training data as in the prior art, which saves time and improves the efficiency of model training.
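To make the shared-underlying-storage point concrete, the short sketch below writes one object through the S3 interface and reads the same bytes back through the NFS mount; the endpoint, credentials, bucket, and mount path are placeholders, and it assumes the proxy exposes the same Ceph-backed storage through both interfaces, as the embodiment describes.

import pathlib

import boto3

s3 = boto3.client("s3",
                  endpoint_url="http://storage-proxy.example.internal:7480",
                  aws_access_key_id="DEMO_ACCESS_KEY",
                  aws_secret_access_key="DEMO_SECRET_KEY")

# Write an annotation file through the S3 interface of the distributed proxy server.
s3.put_object(Bucket="training-data", Key="labels/sample.json",
              Body=b'{"mask": true}')

# Inside the experiment container the same object appears under the NFS mount,
# so it can be read as an ordinary file without any copy or transfer step.
mounted = pathlib.Path("/home/jovyan/data/labels/sample.json")
print(mounted.read_bytes())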
It should be noted that, in an implementation manner of this embodiment, as shown in fig. 1, a data management module in a business system may be communicatively connected to a storage service protocol module in the distributed storage proxy server, the storage service protocol module in the distributed storage proxy server may be communicatively connected to a distributed file system, the distributed file system may be communicatively connected to a file sharing protocol module in the distributed storage proxy server, the file sharing protocol module may be communicatively connected to a container arrangement engine in the business system, and the container arrangement engine is connected to an experimental container.
It can be seen from the above technical solutions that the present application provides a development system comprising a business system, a distributed storage proxy server, and a distributed file system; the business system is connected to the distributed storage proxy server, and the distributed proxy server is connected to the distributed file system; the distributed proxy server is configured with multiple types of transmission protocols; the business system is used for managing training data and for creating and managing experiment containers; the distributed proxy server is used for storing the training data in the business system into the distributed file system through the multiple types of transmission protocols and for mounting the data stored in the distributed file system into an experiment container in the business system; and the distributed file system is used for storing the data written by the distributed proxy server. Because the distributed proxy server is configured with multiple types of transmission protocols, the distributed storage proxy server can complete the storage and reading of training data on top of the distributed file system, and the multiple transmission protocols share a single set of underlying storage (namely the distributed file system). No copying of training data is therefore required while the business system manages training data and manages experiment containers, which saves the time otherwise spent moving data before training a model in an experiment container. In addition, a user can directly use the business system to create and manage experiment containers, so the whole modeling workflow (for example writing code, training models, and writing deployment scripts) can be carried out inside an experiment container, improving the efficiency of modeling and deployment.
It should be noted that, in the present specification, all the embodiments are described in a progressive manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. The above-described apparatus and system embodiments are merely illustrative, in that elements described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The above description is only for the preferred embodiment, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A development system, the system comprising: the system comprises a service system, a distributed storage proxy server and a distributed file system; the service system is connected with the distributed storage proxy server, and the distributed proxy server is connected with the distributed file system; wherein the distributed proxy server is configured with multiple types of transmission protocols;
the business system is used for managing training data, and creating and managing an experiment container;
the distributed proxy server is used for storing the training data in the service system into the distributed file system by utilizing the multiple types of transmission protocols, and mounting the data stored in the distributed file system into an experimental container in the service system;
the distributed file system is used for storing the data stored by the distributed proxy server.
2. The development system of claim 1, wherein the business system comprises a data management module and a container orchestration engine;
the data management module is used for managing training data;
and the container arranging engine is used for creating and managing the experiment containers.
3. The development system of claim 2, wherein the container orchestration engine is Kubernetes.
4. The development system of claim 2, wherein the distributed proxy server is configured with two transport protocols, a storage service protocol and a file sharing protocol;
the distributed proxy server is specifically configured to store the training data in the data management module into the distributed file system through the storage service protocol; and mounting the data stored in the distributed file system to an experimental container in the service system through the file sharing protocol.
5. The development system of claim 4, wherein the experimental container is specifically configured to read training data stored in the distributed file system through the file sharing protocol, and perform model training using the training data to obtain a trained model.
6. The development system of claim 4, wherein the experimental container is specifically configured to upload data to the distributed file system via the file sharing protocol.
7. The development system of claim 4, wherein the storage service protocol comprises a simple storage service protocol.
8. The development system of claim 4, wherein the file sharing protocol comprises a network file system protocol.
9. The development system of claim 2, wherein the data management module is specifically configured to add, delete, and query training data.
10. The development system of claim 2, wherein the container orchestration engine is configured with a code development tool; the container orchestration engine is specifically configured to create an experiment container, and to use the code development tool to enter code and deployment scripts and to create, import, and train a model in the experiment container.
CN202110018599.1A 2021-01-07 2021-01-07 Development system Active CN112800018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110018599.1A CN112800018B (en) 2021-01-07 2021-01-07 Development system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110018599.1A CN112800018B (en) 2021-01-07 2021-01-07 Development system

Publications (2)

Publication Number Publication Date
CN112800018A true CN112800018A (en) 2021-05-14
CN112800018B CN112800018B (en) 2021-09-21

Family

ID=75808968

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110018599.1A Active CN112800018B (en) 2021-01-07 2021-01-07 Development system

Country Status (1)

Country Link
CN (1) CN112800018B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014074957A1 (en) * 2012-11-08 2014-05-15 Sparkledb As Systems and methods involving resource description framework distributed data base managenent systems and/or related aspects
CN106570151A (en) * 2016-10-28 2017-04-19 上海斐讯数据通信技术有限公司 Data collection processing method and system for mass files
CN111488332A (en) * 2020-04-21 2020-08-04 北京智能工场科技有限公司 AI service opening middle platform and method
CN112087423A (en) * 2020-07-29 2020-12-15 深圳市国电科技通信有限公司 Method, device and system for cloud-side cooperative management of terminal equipment
CN112130965A (en) * 2020-10-26 2020-12-25 腾讯科技(深圳)有限公司 Method, equipment and storage medium for deploying distributed container arrangement management cluster


Also Published As

Publication number Publication date
CN112800018B (en) 2021-09-21

Similar Documents

Publication Publication Date Title
CN109194506B (en) Block chain network deployment method, platform and computer storage medium
CN108279932B (en) Method and device for dynamically configuring user interface of mobile terminal
CN106104514B (en) Accelerate method, system and the medium of the object in access object repository
CN109391664A (en) System and method for the deployment of more cluster containers
CN102542045B (en) Unified for resource is accessed
CN102541638B (en) Resource management system and method
CN106663002B (en) REST service source code generation
CN104050216B (en) For customizing the file system manager of resource allocation
US20060075071A1 (en) Centralized management of digital files in a permissions based environment
US20140040791A1 (en) Development platform for software as a service (saas) in a multi-tenant environment
GB2430328A (en) Modelling/simulating a network node including a plurality of protocol layers with selectively configurable switches disposed between and coupling the layers
Baresi et al. Workflow partitioning in mobile information systems
CN110309264A (en) The method and apparatus of knowledge based map acquisition geographic products data
Sciullo et al. Wot store: Enabling things and applications discovery for the w3c web of things
CN105808753B (en) A kind of regionality digital resources system
CN107707625A (en) Foreground resource based on Maven is packed and carries out version management and the method used
CN108255915B (en) File management method and device and machine-readable storage medium
CN112148593B (en) Test case management method, device and equipment
CN109213498A (en) A kind of configuration method and server of internet web front-end
CN111897623B (en) Cluster management method, device, equipment and storage medium
CN102314358A (en) Method for deploying conventional applications on cloud platform in SOA (service oriented architecture) way
CN107133036B (en) Module management method and device
CN107133160A (en) Test system
CN106371931B (en) A kind of high-performance geoscience computing service system based on Web frame
CN112800018B (en) Development system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240108

Address after: No. N3013, 3rd Floor, R&D Building N, Artificial Intelligence Science and Technology Park, Wuhan Economic and Technological Development Zone, Wuhan City, Hubei Province, 430058

Patentee after: Zhongdian Cloud Computing Technology Co.,Ltd.

Patentee after: CHINA ELECTRONIC SYSTEM TECHNOLOGY Co.,Ltd.

Address before: No.49 Fuxing Road, Haidian District, Beijing 100036

Patentee before: CHINA ELECTRONIC SYSTEM TECHNOLOGY Co.,Ltd.