CN114579142A - Dual-mode big data version deployment method, device and equipment supporting steady state and sensitive state


Info

Publication number
CN114579142A
Authority
CN
China
Prior art keywords
version
state
big data
data
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210210624.0A
Other languages
Chinese (zh)
Inventor
林华兵
张晨林
王幼芝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Construction Bank Corp
Original Assignee
China Construction Bank Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Construction Bank Corp filed Critical China Construction Bank Corp
Priority to CN202210210624.0A priority Critical patent/CN114579142A/en
Publication of CN114579142A publication Critical patent/CN114579142A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/60 Software deployment
    • G06F 8/70 Software maintenance or management
    • G06F 8/71 Version control; Configuration management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Stored Programmes (AREA)

Abstract

The invention relates to the technical field of operation and maintenance data, and provides a dual-mode big data version deployment method, apparatus and device supporting steady state and sensitive state. The method comprises: acquiring the current state of a big data application version, the current state being one of a development state, a test state and a running state; and, in response to a deployment migration instruction, performing the following steps: checking the basic environment consistency, application version consistency and environment configuration policy uniformity of the source platform and the target platform of the migration; organizing, on the source platform, the data to be migrated in units of projects according to the current state to generate a version package of the big data application; and importing the version package of the big data application into the target platform and deploying it on the target platform. The method provided by the invention improves the deployment efficiency of big data versions.

Description

Dual-mode big data version deployment method, device and equipment supporting steady state and sensitive state
Technical Field
The invention relates to the technical field of operation and maintenance data, and in particular to a dual-mode big data version deployment method supporting steady state and sensitive state, a corresponding deployment apparatus, a corresponding deployment device and a corresponding storage medium.
Background
Existing commercial big data platforms support online development and production of applications by distinguishing development, test and production resources within a single environment. This is feasible for small-scale data processing scenarios with low production requirements. In large-scale data processing environments with high requirements on the correctness, stability and security of production versions, however, the large data volumes and high computational complexity mean that, first, multiple test environments are needed to verify application versions at different version points; and second, task integration testing and task release place higher requirements on the completeness, dependencies and running environment of tasks: if the data application tasks to be put into operation cannot be fully imported into the production environment, or the task running resources they require do not exist there, the released task version cannot be used.
Existing big data cloud platforms mainly take one of two approaches to version release or migration. The first supports development, testing and production of data applications simultaneously in a single environment, with shared resources and no segmentation of environment functions. The second isolates the development, test and production environments within one physical environment and releases and migrates application versions between them through a function similar to 'one-key release'.
The first approach, in which all stages share one environment and compete with one another for resources, cannot meet the security and supervision requirement that the iterative development, test and production environments of financial big data applications be isolated from one another. The second approach isolates development, test and production only logically within one physical environment, and such logical isolation cannot effectively meet the need of financial big data scenarios to carry out exploratory data applications while keeping production running safely and stably.
Disclosure of Invention
The embodiments of the invention aim to provide a dual-mode big data version deployment method, apparatus and device supporting steady state and sensitive state.
In order to achieve the above object, a first aspect of the present invention provides a dual-mode big data version deployment method supporting steady state and sensitive state, including: acquiring the current state of a big data application version, the current state being one of a development state, a test state and a running state; and, in response to a deployment migration instruction, performing the following steps: checking the basic environment consistency, application version consistency and environment configuration policy uniformity of the source platform and the target platform of the migration; organizing, on the source platform, the data to be migrated in units of projects according to the current state to generate a version package of the big data application; judging whether the big data application version is a steady-state big data application version or a sensitive-state big data application version; and, according to the judgment result, importing the version package of the big data application into the target platform in the corresponding import and deployment manner and deploying it on the target platform.
Preferably, the base environment consistency includes consistency of the running database attributes, the development kit execution paths and the file directories; the application version consistency means that the platform versions of the source platform and the target platform are not lower than the version required by the application version; and the environment configuration policy uniformity includes uniformity of the process file storage paths, the paths of the plug-ins required at runtime, the metadata information and the resource group names.
Preferably, organizing the data to be migrated in units of projects on the source platform according to the current state to generate a version package of the big data application includes: acquiring the project selected by a user; and acquiring, through each component on the source platform, the data of the project corresponding to that component, the components including a data development component, a data management component, a data visualization component, a data mining component and a data service component, wherein the data acquired by the data management component includes metadata information.
Preferably, the data development component, the data management component, the data visualization component, the data mining component and the data service component share a unified data export interface on the source platform, and each component acquires the export parameters entered by the user through its corresponding tab on that interface.
Preferably, before importing the version package of the big data application into the target platform, the method further includes allocating, on the target platform, the resources required for deployment of the big data application version; the resources include virtual machine resources and container resources, and the amount of allocated resources is related to the environment and to the big data application version.
Preferably, importing the version package of the big data application into the target platform in different import and deployment manners according to the judgment result and deploying it on the target platform includes: for a steady-state big data application version, importing the application version and the metadata information in the version package in units of projects according to an import mode, the import mode being a full import mode or an incremental import mode; and for a sensitive-state big data application version, releasing the version package to the corresponding state, the corresponding state including a test state or a running state, and then importing the application version and the metadata information in the version package in full mode in units of projects.
In a second aspect of the present invention, there is also provided a dual-mode big data version deployment apparatus supporting steady state and sensitive state, including: a state determination module configured to acquire the current state of the big data application version, the current state being one of a development state, a test state and a running state; and a migration execution module configured to perform, in response to a deployment migration instruction, the following steps through its sub-modules: a platform checking sub-module configured to check the basic environment consistency, application version consistency and environment configuration policy uniformity of the source platform and the target platform of the migration; a data export sub-module configured to organize, on the source platform, the data to be migrated in units of projects according to the current state to generate a version package of the big data application; a type judgment sub-module configured to judge whether the big data application version is a steady-state big data application version or a sensitive-state big data application version; and a data import sub-module configured to import the version package of the big data application into the target platform in different import and deployment manners according to the judgment result of the type judgment sub-module and deploy it on the target platform.
Preferably, the base environment consistency includes consistency of the running database attributes, the development kit execution paths and the file directories; the application version consistency means that the platform versions of the source platform and the target platform are not lower than the version required by the application version; and the environment configuration policy uniformity includes uniformity of the process file storage paths, the paths of the plug-ins required at runtime, the metadata information and the resource group names.
Preferably, organizing the data to be migrated in units of projects on the source platform according to the current state to generate a version package of the big data application includes: acquiring the project selected by a user; and acquiring, through each component on the source platform, the data of the project corresponding to that component, the components including a data development component, a data management component, a data visualization component, a data mining component and a data service component, wherein the data acquired by the data management component includes metadata information.
Preferably, the data development component, the data management component, the data visualization component, the data mining component and the data service component share a unified data export interface on the source platform, and each component acquires the export parameters entered by the user through its corresponding tab on that interface.
Preferably, before importing the version package of the big data application into the target platform, the resources required for deployment of the big data application version are allocated on the target platform; the resources include virtual machine resources and container resources, and the amount of allocated resources is related to the environment and to the big data application version.
Preferably, importing the version package of the big data application into the target platform in different import and deployment manners according to the judgment result and deploying it on the target platform includes: for a steady-state big data application version, importing the application version and the metadata information in the version package in units of projects according to an import mode, the import mode being a full import mode or an incremental import mode; and for a sensitive-state big data application version, releasing the version package to the corresponding state, the corresponding state including a test state or a running state, and then importing the application version and the metadata information in the version package in full mode in units of projects.
A third aspect of the present application provides a dual-mode big data version deployment device supporting steady-state and sensitive states, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the foregoing dual-mode big data version deployment method supporting steady-state and sensitive states when executing the computer program.
A fourth aspect of the present application provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to execute the foregoing dual-mode big data version deployment method supporting steady-state and sensitive-state.
A fifth aspect of the present application provides a computer program, which when executed by a processor implements the foregoing dual-mode big-data version deployment method supporting steady state and sensitive state.
The technical scheme has the following beneficial effects:
the requirements of steady-state project development, test and release of strong operation management can be supported in a set of large data platform environment, and the agile delivery requirement of innovative projects of online exploration and rapid development can also be met. The isolation of application development and production is realized by designing corresponding development state, test state and running state. And through the mode of 'project' guidance and environmental information pre-configuration, a plurality of production baseline versions can be respectively developed and verified in different test environments, so that the data application is further supported to be rapidly migrated and run among a plurality of sets of big data environments. On the premise of application isolation and resource isolation, the efficiency and the safety of parallel development and production of large-scale data application are improved.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
FIG. 1 is a schematic flow chart diagram illustrating a dual-mode big data version deployment method supporting steady-state and sensitive states according to an embodiment of the present application;
FIG. 2 schematically shows a flow diagram of an export process according to an embodiment of the application;
fig. 3 schematically shows a structural block diagram of a dual-mode big data version deployment device supporting steady-state and sensitive-state according to an embodiment of the present application.
Detailed Description
The following describes in detail embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
In the technical solution of the present application, the acquisition, storage, use and processing of data comply with the relevant provisions of national laws and regulations.
Fig. 1 schematically shows a flowchart of a dual-mode big data version deployment method supporting steady-state and sensitive states according to an embodiment of the present application. As shown in fig. 1, in this embodiment, a dual-mode big data version deployment method supporting steady-state and sensitive-state is provided, including:
S01, acquiring the current state of the big data application version, the current state being one of a development state, a test state and a running state;
S02, in response to a deployment migration instruction, executing the following steps:
S021, checking the basic environment consistency, application version consistency and environment configuration policy uniformity of the source platform and the target platform of the migration;
S022, organizing, on the source platform, the data to be migrated in units of projects according to the current state to generate a version package of the big data application;
S023, judging whether the big data application version is a steady-state big data application version or a sensitive-state big data application version;
S024, according to the judgment result, importing the version package of the big data application into the target platform in the corresponding import and deployment manner and deploying it on the target platform.
In the above embodiment, the stage that the big data application version is in is first determined by its "tri-state". "Tri-state" here refers to the development, test and running states of an application. The three stages are independent of one another and respectively carry the corresponding development, test and production tasks. When a deployment migration instruction is received, the subsequent steps are performed. A deployment migration instruction is any instruction that triggers the deployment migration process; its specific format depends on the predefined format in use. Acquiring the instruction includes directly receiving an instruction entered by the user through an input device, receiving an instruction transmitted by another device, or generating the instruction automatically by a user program.
The application development and test environments of the big data platform (corresponding to the development and test states of an application) and the application production environment (corresponding to the running state) are typically built separately in different enterprises or for different scenarios, so the big data platform environments are relatively independent. After each release, a big data application version needs to be deployed quickly to each application test or production environment. To keep every environment running smoothly, each one needs a consistent basic environment configuration, i.e. a consistent runtime base. Therefore, before executing the migration process, the basic environment consistency, application version consistency and environment configuration policy uniformity of the source and target platforms of the migration must be checked.
During the migration process, data must be exported on the source platform and imported on the target platform. Steps S022 and S024 perform these operations, by which the big data application version on the source platform is deployed to the target platform. Step S023 determines whether the big data application version is steady-state or sensitive-state; the two are distinguished by how strict their determinism and availability requirements are. Step S024 then applies the import and deployment manner corresponding to that distinction. A minimal sketch of this flow is given below.
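To make the S01-S024 flow concrete, here is a minimal Python sketch of the deployment pipeline. All class, function and field names (AppState, AppMode, check_consistency, the dict-based platform handles) are illustrative assumptions, not the patent's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum


class AppState(Enum):            # the "tri-state" of an application version (S01)
    DEVELOPMENT = "development"
    TEST = "test"
    RUNNING = "running"


class AppMode(Enum):             # the dual mode distinguished in S023
    STEADY = "steady"            # strict determinism / availability requirements
    SENSITIVE = "sensitive"      # exploratory, agile projects


@dataclass
class AppVersion:
    project: str
    version: str
    state: AppState
    mode: AppMode


@dataclass
class VersionPackage:            # produced by the per-project export (S022)
    project: str
    components: dict = field(default_factory=dict)


def check_consistency(source: dict, target: dict) -> None:
    """S021 (simplified): base environment and configuration policy uniformity;
    the 'not lower than' platform version check is sketched separately below."""
    for key in ("base_env", "config_policy"):
        if source.get(key) != target.get(key):
            raise RuntimeError(f"source/target platforms differ on '{key}'")


def export_package(source: dict, app: AppVersion) -> VersionPackage:
    """S022: organise the data to be migrated in units of the project."""
    data = {name: exporter(app.project)
            for name, exporter in source["component_exporters"].items()}
    return VersionPackage(project=app.project, components=data)


def deploy(app: AppVersion, source: dict, target: dict) -> None:
    check_consistency(source, target)                  # S021
    package = export_package(source, app)              # S022
    if app.mode is AppMode.STEADY:                     # S023 / S024
        target["incremental_import"](package)          # steady-state path
    else:
        target["release_and_full_import"](package)     # sensitive-state path
```

The two target-side callables stand in for the steady-state and sensitive-state import paths that are elaborated later in the description.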
Through this implementation, a big data application version can be rapidly migrated and released to the integration test environment and the production environment in either steady-state or sensitive-state mode. Conversely, an application version can also be imported from the production environment back into the development and test environments to synchronize and back up the version.
In one embodiment of the present invention, the base environment consistency includes consistency of the running database attributes, the development kit execution paths and the file directories; the application version consistency means that the platform versions of the source platform and the target platform are not lower than the version required by the application version; and the environment configuration policy uniformity includes uniformity of the process file storage paths, the paths of the plug-ins required at runtime, the metadata information and the resource group names.
As described above, before executing the migration process, the basic environment consistency, application version consistency and environment configuration policy uniformity of the source and target platforms must be checked. The three items are illustrated in turn below.
The base environment consistency covers, for example: the running attributes of the database, such as its name, path and connection information; the development kit path provided by the platform; shared file directories such as lib and bin; the root directories where the platform running log and the application running logs are stored; and the root directory of the data service bus. A consistent basic environment configuration ensures that every environment can be operated and maintained in the same way after a platform version is upgraded and deployed. The basic environment configuration is relatively stable; it remains backward compatible when a platform version is deployed and does not overwrite the existing environment configuration.
The application version consistency means that the platform versions of the source platform and the target platform are not lower than the version required by the application version. The version requirements of an application version depend strongly on the platform version; if the platform version is lower than required, those requirements cannot be met. For example, a data application developed on platform version V3.1 can only run on platform version V3.1 or higher, not on platform versions below V3.1.
The environment configuration policy uniformity covers the process file storage paths, the paths of the plug-ins required at runtime, the metadata information and the resource group names. Specific examples include: the files and log directories used by the application, including the storage paths of data acquisition, data service and intermediate results; the paths of the plug-ins against which the application is developed and run; metadata information; and the resource group names used by the application. Because platform versions are upgraded and deployed synchronously to every development, test and production environment, an application version does not need to carry platform version information when it is packaged. It does need to carry its application configuration information, so that when it is imported into another environment the data application can execute normally on the basis of the complete application version and its configuration. A sketch of these checks follows.
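The following is a hedged sketch, under assumed dictionary layouts, of the three pre-migration checks; parse_version and the key names are illustrative, and the "not lower than" comparison follows the V3.1 example above.

```python
def parse_version(v: str) -> tuple:
    """'V3.1' -> (3, 1); assumes the simple numeric scheme of the example."""
    return tuple(int(part) for part in v.lstrip("Vv").split("."))


def check_platforms(source: dict, target: dict, app_required_version: str) -> list:
    problems = []

    # 1. Base environment consistency: database running attributes, devkit
    #    execution path, shared file directories, log roots, service-bus root.
    for key in ("db_attributes", "devkit_path", "lib_dir", "bin_dir",
                "log_root", "service_bus_root"):
        if source["base_env"].get(key) != target["base_env"].get(key):
            problems.append(f"base environment differs on '{key}'")

    # 2. Application version consistency: neither platform version may be
    #    lower than the version the application requires (e.g. an application
    #    built on V3.1 cannot run on a platform below V3.1).
    required = parse_version(app_required_version)
    for name, platform in (("source", source), ("target", target)):
        if parse_version(platform["platform_version"]) < required:
            problems.append(f"{name} platform below required {app_required_version}")

    # 3. Environment configuration policy uniformity: process-file storage
    #    paths, runtime plug-in paths, metadata information, resource groups.
    for key in ("process_file_path", "plugin_path", "metadata", "resource_groups"):
        if source["config_policy"].get(key) != target["config_policy"].get(key):
            problems.append(f"configuration policy differs on '{key}'")

    return problems
```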
In an embodiment provided by the present invention, organizing, on the source platform, the data to be migrated in units of projects according to the current state to generate a version package of the big data application includes: acquiring the project selected by a user; and acquiring, through each component on the source platform, the data of the project corresponding to that component, the components including a data development component, a data management component, a data visualization component, a data mining component and a data service component, wherein the data acquired by the data management component includes metadata information. The big data platform builds its data applications in units of "projects". A big data application project is usually implemented by a project group; it may include a series of tasks implementing the application logic, such as data collection, data integration, visual reports and data services, and it uses the same resource group at runtime. On this basis, the application version on the source platform is also packaged and built in units of "projects", which avoids the loss of parts of the version contents that can occur when tasks or jobs are selected manually.
Fig. 2 schematically shows a flow diagram of an export process according to an embodiment of the application. As shown in fig. 2, the data development, data visualization, data mining and data service components on the source platform each export the data of the project that corresponds to them. For example, the data development component exports job flows (DAGs), and the data mining component exports prediction models. The data exported by all components is combined into the version package of the big data application.
The data management component of the big data platform provides a unified data access interface and manages base table structures, data quality rules, tag information and data permissions. Acquiring the metadata information through the data management component allows the metadata to be migrated and take effect synchronously with the application version, while keeping the metadata logically isolated from the application version and reducing the risks of operating on one through the other. A sketch of this per-project aggregation follows.
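The paragraphs above can be read as a per-project aggregation over the five components, with the data management component contributing the metadata. The sketch below assumes trivial exporter stubs; the real components and their payloads are not specified here.

```python
# Illustrative exporter stubs for the five components; the real components
# return the project's job flows, models, reports, services and metadata.
def export_data_development(project):   return {"job_flows": [f"{project}_dag"]}
def export_data_management(project):    return {"metadata": {"tables": [], "tags": []}}
def export_data_visualization(project): return {"reports": []}
def export_data_mining(project):        return {"models": []}
def export_data_service(project):       return {"apis": []}

COMPONENT_EXPORTERS = {
    "data_development":   export_data_development,
    "data_management":    export_data_management,    # carries the metadata
    "data_visualization": export_data_visualization,
    "data_mining":        export_data_mining,
    "data_service":       export_data_service,
}


def build_version_package(project: str) -> dict:
    """Combine every component's export for one selected project."""
    return {"project": project,
            "components": {name: exporter(project)
                           for name, exporter in COMPONENT_EXPORTERS.items()}}
```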
In one embodiment provided by the invention, the data development component, the data management component, the data visualization component, the data mining component and the data service component share a unified data export interface on the source platform, and each component acquires the export parameters entered by the user through its corresponding tab on that interface. To give users and applications a consistent "tri-state" migration experience and a uniform operating entry, the big data platform aggregates the application version export entries of all components into one unified data export interface. Because different types of tasks have different operating parameters, the unified interface presents the export settings of the various tasks or workflows on separate tabs (similar to sheets in Excel). In the "data development" export tab, for example, the user sets job flow export parameters such as the job flow name, version, run period (daily, weekly, monthly, etc.) and the resource group used. In the "data visualization" export tab, the user sets the task parameters of the report or visualization dashboard to be exported, including the report name, the data processing logic it uses and the users with access rights. Likewise, in the "data management" export tab, a data administrator selects and configures the metadata corresponding to the application, such as database table structure information, connection information and tag information. Through these classified export tabs, personnel in different roles configure the relevant export parameters and call the export interfaces provided by each component, and the version package of the big data application is thereby constructed, as illustrated below.
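As an illustration of the unified export interface, the dictionary below groups assumed export parameters by tab, mirroring the examples in the text (job flow name and schedule, report name and authorized users, metadata selections); none of these field names come from the actual platform.

```python
EXPORT_TABS = {
    "data_development": {            # job flow export parameters
        "job_flow_name": "daily_etl",
        "version": "V1.1",
        "schedule": "daily",         # daily / weekly / monthly ...
        "resource_group": "flink1",
    },
    "data_visualization": {          # report / dashboard export parameters
        "report_name": "ops_dashboard",
        "processing_logic": "agg_by_day",
        "authorized_users": ["analyst01"],
    },
    "data_management": {             # metadata selected by the data administrator
        "table_structures": ["db1.t_orders"],
        "connection_info": ["jdbc_src1"],
        "tags": ["quality_rule_a"],
    },
}


def export_from_tabs(tabs: dict, exporters: dict) -> dict:
    """Call each component's export interface with the parameters of its tab."""
    return {name: exporters[name](params) for name, params in tabs.items()}
```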
In an embodiment provided by the present invention, before importing the version package of the big data application into the target platform, the method further includes allocating, on the target platform, the resources required for deployment of the big data application version; the resources include virtual machine resources and container resources, and the amount allocated is related to the current state of the big data application version. The application environment must be configured before the import. Resources here include, but are not limited to, virtual machine resources and container resources. The development, test and running states use the same types of running resources, mainly virtual machine resources and container resources, but in different amounts: the resource configuration of the development and test states is usually smaller than that of the running state. For example, the resource group used by application App1 for streaming job development in its development and test state (the development and test environment) is flink1 with a resource amount of 1 CU; in the application's running state (the production environment) a resource group also named flink1 is configured with 4 CU. Configuring these resource groups requires higher management authority; they are pre-configured by a system administrator rather than exported and imported, which ensures that after migration to a different environment the application runs with resource groups of the same name. A sketch of such pre-configuration follows.
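The following sketch illustrates operator pre-configuration of resource groups; the CU amounts repeat the App1/flink1 example, while the environment keys and the allocate_for_import helper are assumptions.

```python
RESOURCE_GROUPS = {
    # environment         resource group name -> type and amount
    "development_test": {"flink1": {"kind": "container", "cu": 1}},
    "running":          {"flink1": {"kind": "container", "cu": 4}},
}


def allocate_for_import(target_env: str, required_groups: list) -> None:
    """Verify, before importing a version package, that every resource group
    the application needs is pre-configured under the same name in the
    target environment."""
    configured = RESOURCE_GROUPS.get(target_env, {})
    missing = [group for group in required_groups if group not in configured]
    if missing:
        raise RuntimeError(
            f"resource groups not pre-configured in '{target_env}': {missing}")
```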
In an embodiment provided by the present invention, importing the version package of the big data application into the target platform in different import and deployment manners according to the judgment result and deploying it on the target platform includes: for a steady-state big data application version, importing the application version and the metadata information in the version package in units of projects according to an import mode, the import mode being a full import mode or an incremental import mode; and for a sensitive-state big data application version, releasing the version package to the corresponding state, the corresponding state including a test state or a running state, and then importing the application version and the metadata information in the version package in full mode in units of projects. The steady-state and sensitive-state import processes are described in turn below.
The steady-state import process mirrors the export process: in the environment into which the application version is to be imported, the version is imported by category through a single unified entry with multiple import tabs. The uploaded application version and metadata information are imported in units of "projects". The system supports "full" or "incremental" import. For projects and jobs of the same name that already exist in the migration target environment (including environments used for version verification), an incremental import updates them using the task name and task ID as primary keys (to avoid duplication), and application information that is not present in the version package is not overwritten. A "full" import, by contrast, completely replaces all tasks of a component under the project, which can also be used to remove obsolete tasks. Every update leaves corresponding update records that can be queried.
The project type corresponding to the sensitive-state import process is an integrated project. Such application projects use the same version build and import/export functions as the steady state. The difference is that a sensitive-state release does not relay the application version from state to state; instead, the version is built in its entirety in the "project" dimension. After approval, it is released directly to the corresponding test state and production state over the network linking the three states, and the project is then updated in full, achieving "one-key release". A minimal sketch of this sequence follows.
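Below is a minimal sketch of the sensitive-state "one-key release" sequence, assuming a simple approval flag and a full_import callable; the approval workflow and the inter-state network are not modelled here.

```python
def one_key_release(project: str, package: dict, approved: bool,
                    release_to: str, full_import) -> None:
    """Release a whole project-level version package to the test or running
    state and then overwrite the project with a full import."""
    if release_to not in ("test", "running"):
        raise ValueError("sensitive versions are released to a test or running state")
    if not approved:
        raise PermissionError("release requires prior approval")
    full_import(release_to, package)   # full update of the project = one-key release
```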
The difference between incremental and full import is as follows. When incremental import is selected, for projects and jobs of the same name already existing in the migration target environment (such as a version verification environment), the "tri-state" import updates them incrementally using the task name and task ID as primary keys, and application information not present in the version package is not overwritten. For example, "project 1" in the original production environment contains the data integration tasks "Job1" and "Job2", both at version V1.0. When a new version of the integration task "Job1" is imported into the same-named "project 1", whether numbered V1.1 or V0.9 (the latter enabling version rollback), the new "Job1" overwrites the original version on import, while "Job2" keeps its original version because the version package contains no new "Job2".
When full import is selected, all tasks of the given component under the given project are completely replaced, which can also be used to delete obsolete tasks. Every update leaves corresponding update records that can be queried. The import logic for the application versions of the other components is the same and is not repeated here. A sketch of the two modes is given below.
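The two import modes can be pictured as the dictionary merges below, keyed by (task name, task ID) as in the Job1/Job2 example; the data shapes are assumptions for illustration only.

```python
def incremental_import(existing: dict, package: dict) -> dict:
    """Overwrite only the tasks present in the package; keep everything else.
    Key = (task name, task ID), so a packaged Job1 at V1.1 (or V0.9 for a
    rollback) replaces the deployed Job1 while Job2 stays at its old version."""
    merged = dict(existing)
    merged.update(package)
    return merged


def full_import(existing: dict, package: dict) -> dict:
    """Replace all tasks of the component under the project with the package;
    tasks absent from the package are removed (obsolete-task cleanup)."""
    return dict(package)


# Example mirroring the text: project 1 holds Job1 and Job2 at V1.0; the new
# package carries only Job1 at V1.1.
deployed = {("Job1", 101): "V1.0", ("Job2", 102): "V1.0"}
incoming = {("Job1", 101): "V1.1"}
assert incremental_import(deployed, incoming) == {("Job1", 101): "V1.1",
                                                  ("Job2", 102): "V1.0"}
assert full_import(deployed, incoming) == {("Job1", 101): "V1.1"}
```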
Through this implementation, a big data application can be rapidly migrated and put into operation across multiple big data environments, and, on the premise of application isolation and resource isolation, the efficiency and security of parallel development and production of big data applications are improved.
Based on the same inventive concept, fig. 3 schematically shows a structural block diagram of a dual-mode big data version deployment apparatus supporting steady state and sensitive state according to an embodiment of the present application. As shown in fig. 3, in an embodiment of the present application a dual-mode big data version deployment apparatus supporting steady state and sensitive state is provided, the apparatus including: a state determination module configured to acquire the current state of the big data application version, the current state being one of a development state, a test state and a running state; and a migration execution module configured to perform, in response to a deployment migration instruction, the following steps through its sub-modules: a platform checking sub-module configured to check the basic environment consistency, application version consistency and environment configuration policy uniformity of the source platform and the target platform of the migration; a data export sub-module configured to organize, on the source platform, the data to be migrated in units of projects according to the current state to generate a version package of the big data application; a type judgment sub-module configured to judge whether the big data application version is a steady-state big data application version or a sensitive-state big data application version; and a data import sub-module configured to import the version package of the big data application into the target platform in different import and deployment manners according to the judgment result of the type judgment sub-module and deploy it on the target platform.
In some alternative embodiments, the base environment consistency includes consistency of the running database attributes, the development kit execution paths and the file directories; the application version consistency means that the platform versions of the source platform and the target platform are not lower than the version required by the application version; and the environment configuration policy uniformity includes uniformity of the process file storage paths, the paths of the plug-ins required at runtime, the metadata information and the resource group names.
In some optional embodiments, organizing the data to be migrated in units of projects on the source platform according to the current state to generate a version package of the big data application includes: acquiring the project selected by a user; and acquiring, through each component on the source platform, the data of the project corresponding to that component, the components including a data development component, a data management component, a data visualization component, a data mining component and a data service component, wherein the data acquired by the data management component includes metadata information.
In some optional embodiments, the data development component, the data management component, the data visualization component, the data mining component and the data service component share a unified data export interface on the source platform, and each component acquires the export parameters entered by the user through its corresponding tab on that interface.
In some optional embodiments, before importing the version package of the big data application into the target platform, the resources required for deployment of the big data application version are allocated on the target platform; the resources include virtual machine resources and container resources, and the amount allocated is related to the environment and to the big data application version.
In some optional embodiments, importing the version package of the big data application into the target platform in different import and deployment manners according to the judgment result and deploying it on the target platform includes: for a steady-state big data application version, importing the application version and the metadata information in the version package in units of projects according to an import mode, the import mode being a full import mode or an incremental import mode; and for a sensitive-state big data application version, releasing the version package to the corresponding state, the corresponding state including a test state or a running state, and then importing the application version and the metadata information in the version package in full mode in units of projects.
The above dual-mode big data version deployment device supporting steady-state and sensitive-state comprises a processor and a memory, the above state determination module and the like are stored in the memory as program units, and the processor executes the above program modules stored in the memory to implement corresponding functions.
The processor comprises a kernel, and the kernel calls the corresponding program unit from the memory. One or more than one kernel can be set, and the method for deploying the dual-mode big data version supporting the stable state and the sensitive state is realized by adjusting kernel parameters.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
The embodiment of the application provides equipment, which comprises a processor, a memory and a program which is stored on the memory and can run on the processor, wherein the processor realizes the steps of the dual-mode big data version deployment method supporting steady state and sensitive state when executing the program.
The present application further provides a computer program product adapted to perform a program initialized with the steps of a dual-mode big data version deployment method supporting steady-state and sensitive-state when executed on a data processing device.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (15)

1. A dual-mode big data version deployment method supporting steady state and sensitive state is characterized by comprising the following steps:
acquiring a current state of a big data application version, wherein the current state is one of a development state, a test state and an operation state;
in response to deploying the migration instruction, performing the steps of:
determining the basic environment consistency, the application version consistency and the environment configuration strategy uniformity of a source platform and a target platform in the migration process;
organizing data to be migrated in a project unit according to the current state of the big data application version at the source platform to generate a version package of the big data application;
judging whether the big data application version is a steady-state big data application version or a sensitive-state big data application version;
and importing the version package of the big data application into the target platform by adopting different importing and deploying modes according to the judgment result, and deploying on the target platform.
2. The method of claim 1,
the base environment consistency includes: running the attribute of the database, the execution path of the development kit and the consistency of the file directories;
the application version consistency comprises: whether the platform versions of the source platform and the target platform are not lower than the version requirements of the application version;
the environment configuration policy unification comprises: the storage path of the process file, the path of the plug-in required for running, the metadata information and the unification of the resource group names.
3. The method according to claim 1, wherein organizing the data to be migrated in terms of items at the source platform according to the current state of the big data application version to generate a version package of the big data application, comprises:
acquiring a project selected by a user;
respectively acquiring data of the project corresponding to each component through the components on the source platform, wherein the components comprise:
a data development component, a data management component, a data visualization component, a data mining component and a data service component;
the data acquired by the data management component comprises metadata information.
4. The method of claim 3,
the data development component, the data management component, the data visualization component, the data mining component and the data service component on the source platform have a uniform data export interface on the source platform;
and each component acquires the export parameters input by the user through the corresponding tab on the data export interface.
5. The method of claim 1, wherein prior to importing the version package of the big data application into the target platform, the method further comprises:
allocating resources required for deployment of the big data application version on the target platform; the resources comprise virtual machine resources and container resources;
wherein the allocated resources are associated with a current state of the big data application version.
6. The method according to claim 1, wherein importing the version package of the big data application to the target platform in different importing deployment manners according to the determination result, and deploying on the target platform, includes:
the importing and deploying mode of the steady-state big data application version comprises the following steps: according to an import mode, importing the application version and the metadata information in the version package of the big data application by taking the project as a unit; the import mode is a full-quantity import mode or an incremental import mode;
the importing and deploying mode of the sensitive big data application version comprises the following steps: and releasing the version package of the big data application to a corresponding state, wherein the corresponding state comprises a test state or an operation state, and then importing the application version and the metadata information in the version package of the big data application in a full mode by taking the project as a unit.
7. A dual-mode big data version deployment device supporting steady state and sensitive state is characterized by comprising:
the state determining module is used for acquiring the current state of the big data application version, wherein the current state is one of a development state, a test state and an operation state;
the migration execution module is used for responding to the deployment migration instruction and executing the deployment migration step through the following sub-modules:
the platform checking submodule is used for checking the basic environment consistency, the application version consistency and the environment configuration strategy consistency of the source platform and the target platform in the migration process;
the data export submodule is used for organizing data to be migrated in a project unit on the source platform according to the current state of the big data application version to generate a version package of the big data application;
the type judgment sub-module is used for judging whether the big data application version is a steady-state big data application version or a sensitive-state big data application version; and
and the data import submodule is used for importing the version package of the big data application into the target platform by adopting different import deployment modes according to the judgment result of the type judgment submodule and deploying the version package on the target platform.
8. The apparatus of claim 7,
the base environment consistency includes: the attribute of the operation database, the execution path of the development kit and the consistency of the operation file directory;
the application version consistency comprises: whether the platform versions of the source platform and the target platform are not lower than the version requirements of the application version;
the environment configuration policy unification comprises: the storage path of the process file, the path of the plug-in required for running, the metadata information and the unification of the resource group names.
9. The apparatus according to claim 7, wherein organizing the data to be migrated in terms of items at the source platform according to the current state of the big data application version to generate a version package of the big data application, comprises:
acquiring a project selected by a user;
respectively acquiring data of the project corresponding to each component through the components on the source platform, wherein the components comprise:
a data development component, a data management component, a data visualization component, a data mining component and a data service component;
the data acquired by the data management component comprises metadata information.
10. The apparatus of claim 9,
the data development component, the data management component, the data visualization component, the data mining component and the data service component on the source platform have a uniform data export interface on the source platform;
and each component acquires the export parameters input by the user through the corresponding tab on the data export interface.
11. The apparatus of claim 7, wherein the data import sub-module, prior to importing the version package of the big data application into the target platform, is further configured to:
allocating resources required for deployment of the big data application version on the target platform; the resources comprise virtual machine resources and container resources;
wherein the allocated resources are associated with a current state of the big data application version.
12. The apparatus of claim 7,
according to the judgment result, importing the version package of the big data application into the target platform by adopting different importing and deploying modes, and deploying on the target platform, wherein the importing and deploying modes comprise the following steps:
the importing and deploying mode of the steady-state big data application version comprises the following steps: according to an import mode, importing the application version and the metadata information in the version package of the big data application by taking the project as a unit; the import mode is a full-quantity import mode or an incremental import mode;
the importing and deploying mode of the sensitive big data application version comprises the following steps: and releasing the version package of the big data application to a corresponding state, wherein the corresponding state comprises a test state or an operation state, and then importing the application version and the metadata information in the version package of the big data application in a full mode by taking the project as a unit.
13. A dual-mode big data version deployment device supporting steady-state and sensitive states, comprising a memory, a processor and a computer program stored in the memory and operable on the processor, wherein the processor implements the dual-mode big data version deployment method supporting steady-state and sensitive states as claimed in any one of claims 1 to 6 when executing the computer program.
14. A computer-readable storage medium, wherein the storage medium has stored therein instructions, which when executed on a computer, cause the computer to execute the dual-mode big data version deployment method supporting steady-state and sensitive-state as claimed in any one of claims 1 to 6.
15. A computer program product comprising a computer program which, when executed by a processor, implements the dual-mode big data version deployment method supporting steady-state and sensitive states of any of claims 1 to 6.
CN202210210624.0A 2022-03-04 2022-03-04 Dual-mode big data version deployment method, device and equipment supporting steady state and sensitive state Pending CN114579142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210210624.0A CN114579142A (en) 2022-03-04 2022-03-04 Dual-mode big data version deployment method, device and equipment supporting steady state and sensitive state

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210210624.0A CN114579142A (en) 2022-03-04 2022-03-04 Dual-mode big data version deployment method, device and equipment supporting steady state and sensitive state

Publications (1)

Publication Number Publication Date
CN114579142A 2022-06-03

Family

ID=81778411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210210624.0A Pending CN114579142A (en) 2022-03-04 2022-03-04 Dual-mode big data version deployment method, device and equipment supporting steady state and sensitive state

Country Status (1)

Country Link
CN (1) CN114579142A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116450534A (en) * 2023-06-19 2023-07-18 建信金融科技有限责任公司 Method, device, equipment and medium for generating mobile terminal application program
CN116450534B (en) * 2023-06-19 2023-08-22 建信金融科技有限责任公司 Method, device, equipment and medium for generating mobile terminal application program

Similar Documents

Publication Publication Date Title
CN110809017B (en) Data analysis application platform system based on cloud platform and micro-service framework
US10042903B2 (en) Automating extract, transform, and load job testing
US8321856B2 (en) Supplying software updates synchronously
US11327744B2 (en) Equivalency of revisions on modern version control systems
US20110107327A1 (en) Assisting server migration
US9959336B2 (en) Compiling extract, transform, and load job test data cases
CN107783816A (en) The method and device that creation method and device, the big data cluster of virtual machine create
CN112866333A (en) Cloud-native-based micro-service scene optimization method, system, device and medium
KR20200115020A (en) Hyperledger fabric network creation method, controller and storage medium
CN105786696A (en) Test method and device
US10503630B2 (en) Method and system for test-execution optimization in an automated application-release-management system during source-code check-in
CN112835924A (en) Real-time computing task processing method, device, equipment and storage medium
JPWO2017033441A1 (en) System construction support system, method, and storage medium
Strauch et al. Decision support for the migration of the application database layer to the cloud
CN112395196A (en) Data operation development test method, device, equipment, system and storage medium
CN111897623A (en) Cluster management method, device, equipment and storage medium
US11704114B2 (en) Data structures for managing configuration versions of cloud-based applications
CN106951593B (en) Method and device for generating configuration file of protection measurement and control device
CN114579142A (en) Dual-mode big data version deployment method, device and equipment supporting steady state and sensitive state
US11966732B2 (en) Data structures for managing configuration versions of cloud-based applications
CN116450107B (en) Method and device for secondary development of software by low-code platform and electronic equipment
CN114791884A (en) Test environment construction method and device, storage medium and electronic equipment
CN113377415A (en) Application publishing method and device
Sangapu et al. The Definitive Guide to Modernizing Applications on Google Cloud: The what, why, and how of application modernization on Google Cloud
CN113628678B (en) High-flux virtual drug screening method and system based on spark computing engine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination