CN113239060A - Data resource allocation processing method, device, equipment and storage medium


Info

Publication number
CN113239060A
Authority
CN
China
Prior art keywords
data
target
resource
task
preset
Prior art date
Legal status
Granted
Application number
CN202110598937.3A
Other languages
Chinese (zh)
Other versions
CN113239060B (en)
Inventor
李博
Current Assignee
Kangjian Information Technology Shenzhen Co Ltd
Original Assignee
Kangjian Information Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Kangjian Information Technology Shenzhen Co Ltd
Priority to CN202110598937.3A
Publication of CN113239060A
Application granted
Publication of CN113239060B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2365 Ensuring data consistency and integrity
    • G06F 16/22 Indexing; Data structures therefor; Storage structures
    • G06F 16/2282 Tablespace storage structures; Management thereof
    • G06F 16/24 Querying
    • G06F 16/242 Query formulation
    • G06F 16/2433 Query languages
    • G06F 16/245 Query processing
    • G06F 16/2455 Query execution
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Security & Cryptography (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of big data, is applied to the field of intelligent medical treatment, and discloses a data resource allocation processing method, apparatus, device, and storage medium for improving the standardization of the data processing flow. The data resource allocation processing method includes: establishing a resource queue mapping relationship between an initial running resource and a preset distributed system to obtain a target running resource; allocating the target running resource to a target application; setting a target business process for the target application, setting a target data task for the target business process according to a preset permission policy, and sharing the target data task; and performing real-time data mining processing on each preset data layer through the target data task, generating a target data table corresponding to each data layer in the workspace corresponding to the target application, and sharing the target data tables corresponding to the data layers. In addition, the invention also relates to blockchain technology, and the target data tables can be stored in blockchain nodes.

Description

Data resource allocation processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of resource allocation of big data, and in particular, to a method, an apparatus, a device, and a storage medium for data resource allocation processing.
Background
There are currently many big data products on the market, including real-time stream processing frameworks for real-time data processing such as the Storm framework, the Spark Streaming framework, and the Flink framework. Most companies build their real-time data processing platforms on these frameworks.
Existing real-time data processing platforms are capable of multi-tenant resource isolation and resource control. However, because they interface with multiple data sources and their data processing flows are diverse, these platforms suffer from poorly standardized data processing flows and from poor readability and reusability of data assets when building the layers of a data warehouse system.
Disclosure of Invention
The invention provides a data resource allocation processing method, apparatus, device, and storage medium, which are used to improve the standardization of the data processing flow and the readability and reusability of data assets.
In order to achieve the above object, a first aspect of the present invention provides a data resource allocation processing method, including: receiving a resource configuration request, generating an initial running resource according to the resource configuration request, and establishing a resource queue mapping relation between the initial running resource and a preset distributed system to obtain a target running resource, wherein the target running resource has a unique target tenant identification; receiving an application creation request, generating a target application according to the application creation request, and allocating the target running resource to the target application according to an application identifier and a target tenant identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating that storage definitions are respectively applied to each data table, each data task, each data component and each data function according to a document object mode; receiving a business process generation request, setting a target business process for the target application according to the business process generation request, setting a target data task for the target business process according to a preset authority strategy, and performing task sharing on the target data task when starting a data sharing task; and running the target data task based on the target running resource, performing real-time data mining processing on preset data layers through the target data task, generating target data tables corresponding to the data layers in a working space corresponding to the target application, and performing data sharing on the target data tables corresponding to the data layers when starting a data sharing task, wherein the target data tables corresponding to the data layers comprise a data source table, a dimension table and a data result table.
Optionally, in a first implementation manner of the first aspect of the present invention, the receiving a resource configuration request, generating an initial operating resource according to the resource configuration request, and establishing a resource queue mapping relationship between the initial operating resource and a preset distributed system to obtain a target operating resource, where the target operating resource has a unique target tenant identifier, includes: receiving a resource configuration request, and performing parameter analysis on the resource configuration request to obtain a target tenant identifier, resource information to be allocated and a queue identifier; querying preset resource configuration information according to the target tenant identification to obtain query data; if the query data is not null, configuring initial operating resources according to the information of the resources to be distributed and the query data; if the query data is null, dividing preset computing resources according to the information of the resources to be allocated to obtain initial operating resources, and mapping and storing the target tenant identification and the initial operating resources into a preset resource allocation data table; and establishing a resource queue mapping relation between the initial running resource and a preset distributed system according to the target tenant identification and the queue identification to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
Optionally, in a second implementation manner of the first aspect of the present invention, if the query data is not a null value, configuring an initial operating resource according to the information of the resource to be allocated and the query data, where the configuring includes: if the query data is not null, judging whether the resource information to be distributed is consistent with the query data; if the resource information to be distributed is consistent with the query data, determining the query data as an initial operating resource; and if the resource information to be distributed is inconsistent with the query data, performing capacity expansion or capacity reduction processing according to the resource information to be distributed to obtain initial running resources, and updating the initial running resources into the preset resource distribution data table according to the target tenant identification.
Optionally, in a third implementation manner of the first aspect of the present invention, the establishing a resource queue mapping relationship between the initial operating resource and a preset distributed system according to the target tenant identity and the queue identity to obtain a target operating resource, where the target operating resource has a unique target tenant identity, includes: reading a preset queue configuration strategy according to the queue identification to obtain read data; when the read data is null, inquiring a preset queue creation rule according to the queue identification to obtain a queue generation instruction, and calling the queue generation instruction to create a target queue in a preset distributed system; and storing the initial running resource and the target queue into the preset queue configuration strategy in an associated manner according to the target tenant identification and the queue identification to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the receiving an application creation request, generating a target application according to the application creation request, and allocating the target running resource to the target application according to an application identifier and the target tenant identifier, where the target application has a unique application identifier and a unique workspace, and the workspace is used to indicate that storage definitions are respectively defined for each data table, each data task, each data component, and each data function according to a document object manner, and the method includes: receiving an application creation request, and analyzing the application creation request to obtain an application identifier and the target tenant identifier; judging whether a target application is created or not according to the application identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating that a data table, a data task, a data component and a data function are respectively stored and defined in a document object mode; if the target application is established, establishing a binding relationship between the target application and the target running resource according to the application identifier and the target tenant identifier; if the target application is not created, the target application is created according to the application identifier, the working space is distributed to the target application, and the target application and the target running resource are mapped and bound according to the application identifier and the target tenant identifier.
Optionally, in a fifth implementation manner of the first aspect of the present invention, the receiving a service flow generation request, setting a target service flow for the target application according to the service flow generation request, setting a target data task for the target service flow according to a preset permission policy, and performing task sharing on the target data task when starting a data sharing task, includes: receiving a service flow generation request, performing parameter analysis on the service flow generation request to obtain a service identifier, the application identifier and a data task identifier, and determining the target application according to the application identifier; when a target business process does not exist in the target application, inquiring a preset business process rule according to the business identifier to obtain a business process name, creating a target business process in the working space based on the business process name, and establishing a mapping relation between the target business process and the target application according to the business identifier and the application identifier; inquiring a preset data task creating rule according to the data task identifier to obtain a data task name, and retrieving a preset public space based on a preset authority strategy and the data task name to obtain a retrieval result; when the retrieval result is not a null value, determining that a data processing docking assembly exists in the preset public space, setting the data processing docking assembly as a target data task, and mapping and associating a file address corresponding to the target data task with the service identifier, wherein the target data task is a document object; when the retrieval result is a null value, generating a target data task based on the data task name, storing the target data task in a preset data task document, and setting a mapping relation between the target business process and the target data task according to the data task identifier and the business identifier, wherein the target data task is a document object; acquiring a sharing state code, if the sharing state code is a preset sharing value, starting a data sharing task, and when the data sharing task is started, packaging the target data task into a target component through a preset resource sharing security mechanism, and issuing the target component to the preset public space.
Optionally, in a sixth implementation manner of the first aspect of the present invention, the running the target data task based on the target running resource, performing real-time data mining on preset data layers by using the target data task, and generating a target data table corresponding to each data layer in a working space corresponding to the target application, and when starting a data sharing task, performing data sharing on the target data table corresponding to each data layer, where the target data table corresponding to each data layer includes a data source table, a dimension table, and a data result table, includes: calling a preset data tool based on the target running resource, analyzing the target data task through the preset data tool, and performing real-time data mining processing on each preset data layer to obtain mining data corresponding to each data layer, wherein the preset data tool comprises a preset data component and a preset data function; under the working space corresponding to the target application, generating a target data table corresponding to each data layer based on the mining data corresponding to each data layer, wherein the target data table corresponding to each data layer comprises a data source table, a dimension table and a data result table; the method comprises the steps of obtaining a sharing state code, starting a data sharing task if the sharing state code is a preset sharing value, issuing a target data table corresponding to each data layer to a preset public database when the data sharing task is started, and authorizing the target data table corresponding to each data layer according to a preset security level.
A second aspect of the present invention provides a data resource allocation processing apparatus, including: the generation module is used for receiving a resource configuration request, generating an initial running resource according to the resource configuration request, and establishing a resource queue mapping relation between the initial running resource and a preset distributed system to obtain a target running resource, wherein the target running resource has a unique target tenant identification; the allocation module is used for receiving an application creation request, generating a target application according to the application creation request, and allocating the target running resource to the target application according to an application identifier and a target tenant identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating storage definitions of each data table, each data task, each data component and each data function according to a document object mode; the setting module is used for receiving a business process generation request, setting a target business process for the target application according to the business process generation request, setting a target data task for the target business process according to a preset authority strategy, and performing task sharing on the target data task when a data sharing task is started; and the processing module is used for running the target data task based on the target running resource, performing real-time data mining processing on preset data layers through the target data task, generating target data tables corresponding to the data layers under the working space corresponding to the target application, and performing data sharing on the target data tables corresponding to the data layers when starting a data sharing task, wherein the target data tables corresponding to the data layers comprise a data source table, a dimension table and a data result table.
Optionally, in a first implementation manner of the second aspect of the present invention, the generating module includes: the analysis unit is used for receiving the resource configuration request and carrying out parameter analysis on the resource configuration request to obtain a target tenant identifier, resource information to be distributed and a queue identifier; the query unit is used for querying preset resource configuration information according to the target tenant identification to obtain query data; a configuration unit, configured to configure an initial operating resource according to the resource information to be allocated and the query data if the query data is not a null value; the dividing unit is used for dividing preset computing resources according to the resource information to be allocated to obtain initial operating resources if the query data is null, and mapping and storing the target tenant identification and the initial operating resources into a preset resource allocation data table; and the establishing unit is used for establishing a resource queue mapping relation between the initial running resource and a preset distributed system according to the target tenant identification and the queue identification to obtain a target running resource, and the target running resource has a unique target tenant identification.
Optionally, in a second implementation manner of the second aspect of the present invention, the configuration unit is specifically configured to: if the query data is not null, judging whether the resource information to be distributed is consistent with the query data; if the resource information to be distributed is consistent with the query data, determining the query data as an initial operating resource; and if the resource information to be distributed is inconsistent with the query data, performing capacity expansion or capacity reduction processing according to the resource information to be distributed to obtain initial running resources, and updating the initial running resources into the preset resource distribution data table according to the target tenant identification.
Optionally, in a third implementation manner of the second aspect of the present invention, the establishing unit is specifically configured to: reading a preset queue configuration strategy according to the queue identification to obtain read data; when the read data is null, inquiring a preset queue creation rule according to the queue identification to obtain a queue generation instruction, and calling the queue generation instruction to create a target queue in a preset distributed system; and storing the initial running resource and the target queue into the preset queue configuration strategy in an associated manner according to the target tenant identification and the queue identification to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the allocating module is specifically configured to: receiving an application creation request, and analyzing the application creation request to obtain an application identifier and the target tenant identifier; judging whether a target application is created or not according to the application identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating that a data table, a data task, a data component and a data function are respectively stored and defined in a document object mode; if the target application is established, establishing a binding relationship between the target application and the target running resource according to the application identifier and the target tenant identifier; if the target application is not created, the target application is created according to the application identifier, the working space is distributed to the target application, and the target application and the target running resource are mapped and bound according to the application identifier and the target tenant identifier.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the setting module is specifically configured to: receiving a service flow generation request, performing parameter analysis on the service flow generation request to obtain a service identifier, the application identifier and a data task identifier, and determining the target application according to the application identifier; when a target business process does not exist in the target application, inquiring a preset business process rule according to the business identifier to obtain a business process name, creating a target business process in the working space based on the business process name, and establishing a mapping relation between the target business process and the target application according to the business identifier and the application identifier; inquiring a preset data task creating rule according to the data task identifier to obtain a data task name, and retrieving a preset public space based on a preset authority strategy and the data task name to obtain a retrieval result; when the retrieval result is not a null value, determining that a data processing docking assembly exists in the preset public space, setting the data processing docking assembly as a target data task, and mapping and associating a file address corresponding to the target data task with the service identifier, wherein the target data task is a document object; when the retrieval result is a null value, generating a target data task based on the data task name, storing the target data task in a preset data task document, and setting a mapping relation between the target business process and the target data task according to the data task identifier and the business identifier, wherein the target data task is a document object; acquiring a sharing state code, if the sharing state code is a preset sharing value, starting a data sharing task, and when the data sharing task is started, packaging the target data task into a target component through a preset resource sharing security mechanism, and issuing the target component to the preset public space.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the processing module is specifically configured to: calling a preset data tool based on the target running resource, analyzing the target data task through the preset data tool, and performing real-time data mining processing on each preset data layer to obtain mining data corresponding to each data layer, wherein the preset data tool comprises a preset data component and a preset data function; under the working space corresponding to the target application, generating a target data table corresponding to each data layer based on the mining data corresponding to each data layer, wherein the target data table corresponding to each data layer comprises a data source table, a dimension table and a data result table; the method comprises the steps of obtaining a sharing state code, starting a data sharing task if the sharing state code is a preset sharing value, issuing a target data table corresponding to each data layer to a preset public database when the data sharing task is started, and authorizing the target data table corresponding to each data layer according to a preset security level.
A third aspect of the present invention provides a data resource allocation processing apparatus, including: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the data resource allocation processing device to execute the data resource allocation processing method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored therein instructions, which, when run on a computer, cause the computer to execute the above-described data resource allocation processing method.
In the technical scheme provided by the invention, a resource configuration request is received, an initial running resource is generated according to the resource configuration request, a resource queue mapping relation is established between the initial running resource and a preset distributed system, and a target running resource is obtained, wherein the target running resource has a unique target tenant identification; receiving an application creation request, generating a target application according to the application creation request, and allocating the target running resource to the target application according to an application identifier and a target tenant identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating that storage definitions are respectively applied to each data table, each data task, each data component and each data function according to a document object mode; receiving a business process generation request, setting a target business process for the target application according to the business process generation request, setting a target data task for the target business process according to a preset authority strategy, and performing task sharing on the target data task when starting a data sharing task; and running the target data task based on the target running resource, performing real-time data mining processing on preset data layers through the target data task, generating target data tables corresponding to the data layers in a working space corresponding to the target application, and performing data sharing on the target data tables corresponding to the data layers when starting a data sharing task, wherein the target data tables corresponding to the data layers comprise a data source table, a dimension table and a data result table. In the embodiment of the invention, a resource queue mapping relation is established between the initial running resource and a preset distributed system to obtain a target running resource; allocating a target running resource and a target business process to the target application, setting a target data task for the target business process according to a preset authority strategy, and sharing the target data task; and performing real-time data mining processing on each preset data layer through a target data task, and generating and sharing a target data table corresponding to each data layer in a working space corresponding to a target application. The normalization of the data processing flow and the readability and reusability of the data assets are improved.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a data resource allocation processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of the data resource allocation processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a data resource allocation processing apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another embodiment of the data resource allocation processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of an embodiment of a data resource allocation processing device according to an embodiment of the present invention.
Detailed Description
The embodiments of the invention provide a data resource allocation processing method, apparatus, device, and storage medium, which are used to improve the standardization of the data processing flow and the readability and reusability of data assets.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
For convenience of understanding, a detailed flow of an embodiment of the present invention is described below, and referring to fig. 1, an embodiment of a data resource allocation processing method according to an embodiment of the present invention includes:
101. receiving a resource configuration request, generating an initial running resource according to the resource configuration request, and establishing a resource queue mapping relation between the initial running resource and a preset distributed system to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
The preset distributed system includes a plurality of distributed data fragment clusters, where each data fragment cluster is a yarn computing cluster and each yarn cluster contains a plurality of yarn queues. The number of initial running resources may be one or more, which is not specifically limited here. For example, when there is one initial running resource, the server creates an initial running resource named wind control special resource and maps it to a preset yarn queue, the default queue in a big data yarn cluster, to obtain the target running resource. When there are multiple initial running resources, the server creates an initial running resource as a universal resource pool and establishes a resource queue mapping relationship between the universal resource pool and the default queue in a universal yarn cluster (a preset distributed system) to obtain a first running resource; the server creates an initial running resource as a universal wind control resource and establishes a resource queue mapping relationship between the universal wind control resource and the risk queue on the universal yarn cluster to obtain a second running resource; and the server creates an initial running resource as a wind control special resource and establishes a resource queue mapping relationship between the wind control special resource and the default queue in a wind control yarn cluster to obtain a third running resource. The first running resource, the second running resource, and the third running resource are target running resources for which the resource queue mapping relationship has been established, and each running resource (that is, each target running resource) has a unique target tenant identifier. The default queue associated with the universal resource pool and the risk queue associated with the universal wind control resource belong to the same cluster (the universal yarn cluster), and the two queues isolate computing resources from each other; the cluster corresponding to the default queue associated with the wind control special resource is different from that of the universal resource pool and the universal wind control resource, so their resources are isolated as well. This improves the tenant resource isolation and resource control capability of the platform.
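The following is a minimal, illustrative sketch (not part of the patent's implementation) of the tenant-to-queue mapping described above; the class and function names are assumptions introduced only for illustration, while the cluster, queue, and resource names follow the examples in this step.

```python
# Illustrative sketch: an in-memory model of mapping a tenant's initial running
# resource onto a yarn cluster/queue pair. RunningResource and QueueMapping are
# assumed structures, not the patent's actual data model.
from dataclasses import dataclass

@dataclass
class RunningResource:
    tenant_id: str          # unique target tenant identifier
    name: str               # e.g. "universal_resource_pool", "wind_control_special"
    vcores: int             # CPU share reserved for this tenant
    memory_gb: int          # memory share reserved for this tenant

@dataclass
class QueueMapping:
    resource: RunningResource
    cluster: str            # which yarn cluster the resource is bound to
    queue: str               # which yarn queue inside that cluster

def bind_resource_to_queue(resource, cluster, queue, registry):
    """Establish the resource queue mapping and return the 'target running resource'."""
    mapping = QueueMapping(resource=resource, cluster=cluster, queue=queue)
    registry[resource.tenant_id] = mapping   # one mapping per tenant identifier
    return mapping

registry = {}
# Universal resource pool -> default queue of the universal yarn cluster
bind_resource_to_queue(RunningResource("tenant_001", "universal_resource_pool", 64, 256),
                       cluster="universal_yarn", queue="default", registry=registry)
# Universal wind control resource -> risk queue of the same universal yarn cluster
bind_resource_to_queue(RunningResource("tenant_002", "universal_wind_control", 32, 128),
                       cluster="universal_yarn", queue="risk", registry=registry)
# Wind control special resource -> default queue of a separate wind control yarn cluster
bind_resource_to_queue(RunningResource("tenant_003", "wind_control_special", 32, 128),
                       cluster="wind_control_yarn", queue="default", registry=registry)
```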
It is to be understood that the executing entity of the present invention may be a data resource allocation processing device, and may also be a terminal or a server, which is not limited herein. The embodiment of the present invention is described by taking a server as an execution subject.
102. Receiving an application creation request, generating a target application according to the application creation request, and allocating target running resources to the target application according to an application identifier and a target tenant identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating storage definitions of each data table, each data task, each data component and each data function according to a document object mode.
The number of target applications may be one or more, which is not limited here. The target application is used to provide services such as online medical consultation and registration, which is not limited here. The server interacts with the user through the target application. Specifically, the server receives an application creation request; the server parses the application creation request to obtain parsed parameters and performs parameter verification on them to obtain an application identifier; the server generates the target application according to the application identifier; and the server binds the application identifier and the target tenant identifier, thereby allocating the target running resource to the target application. For example, a target wind control business department detects abnormal transaction behavior of a user; after the server creates a wind control application (i.e., the target application), the server allocates to it an independent workspace named risk, which may include development components such as data tasks, data tables, and functions.
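As an illustrative sketch only (the Application structure, workspace path layout, and identifiers are assumptions), the application creation and binding described in this step might be modeled as follows.

```python
# Sketch: create a target application, assign it a unique workspace, and bind it
# to the tenant's target running resource. The path style follows the example
# given later in step 202; everything else is an assumed simplification.
from dataclasses import dataclass, field

@dataclass
class Application:
    app_id: str                       # unique application identifier
    tenant_id: str                    # target tenant identifier of the bound resource
    workspace: str                    # unique workspace root for this application
    documents: dict = field(default_factory=dict)  # data tables, tasks, components, functions

def create_application(app_id, tenant_id, applications):
    """Parse an application creation request, then allocate a workspace and bind resources."""
    if app_id in applications:                       # target application already created
        applications[app_id].tenant_id = tenant_id   # (re)bind it to the target running resource
        return applications[app_id]
    app = Application(app_id=app_id,
                      tenant_id=tenant_id,
                      workspace=f"/var/www/html/{app_id}")
    applications[app_id] = app
    return app

applications = {}
risk_app = create_application("app_risk", "tenant_003", applications)
print(risk_app.workspace)   # /var/www/html/app_risk
```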
103. Receiving a business process generation request, setting a target business process for a target application according to the business process generation request, setting a target data task for the target business process according to a preset authority strategy, and performing task sharing on the target data task when starting a data sharing task.
The number of target business processes may be one or more, which is not limited here; the number of target data tasks may likewise be one or more. The preset permission policy is used to indicate whether there is permission to read a data task from the preset public space. The target data task includes at least one SQL statement constructed in the structured query language (SQL). Specifically, the server receives the business process generation request, parses it to obtain a plurality of parameters, and verifies these parameters to obtain a verification result, where the parameters include a business identifier, an application identifier, a data task identifier, and a sharing state code. When verification succeeds, the server creates the target business process according to the business identifier and maps the business identifier to the application identifier; for example, to detect abnormal transaction behavior of a user, the server creates a target business process named user behavior in application A. The server then sets a target data task for the target business process according to the preset permission policy, the business identifier, and the data task identifier; for example, the server creates a target data task named transaction behavior detection under the user behavior business process, and this data task may come from the preset public space. The target business process is used to indicate a combination of data tasks that share the same business attributes. Each data processing task in the target application (including the target data task) runs in a big data cluster (belonging to the preset distributed system) through the target running resource.
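The following hypothetical sketch models the business process and data task objects described in this step; the structures, the permission flag, and the example SQL statement are assumptions made for illustration.

```python
# Sketch: a business process holding data tasks whose bodies are SQL statements,
# with reuse from the public space gated by a simple permission policy.
from dataclasses import dataclass, field

@dataclass
class DataTask:
    task_id: str
    name: str
    sql: str                      # the task body is built from SQL statements
    shared: bool = False          # whether it has been published to the public space

@dataclass
class BusinessProcess:
    business_id: str
    name: str                     # e.g. "user_behavior"
    tasks: list = field(default_factory=list)

def set_data_task(process, task, permission_policy, public_space):
    """Attach a data task to the business process, reusing a shared one if the policy allows."""
    if permission_policy.get("can_read_public_space") and task.name in public_space:
        reused = public_space[task.name]          # reuse the shared data task
        process.tasks.append(reused)
        return reused
    process.tasks.append(task)                    # otherwise attach the newly created task
    return task

flow = BusinessProcess("biz_001", "user_behavior")
task = DataTask("task_001", "transaction_behavior_detection",
                sql="SELECT userId, COUNT(*) FROM ods_user_act GROUP BY userId")
set_data_task(flow, task, {"can_read_public_space": True}, public_space={})
```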
104. Running the target data task based on the target running resource, performing real-time data mining processing on each preset data layer through the target data task, generating a target data table corresponding to each data layer in the working space corresponding to the target application, and performing data sharing on the target data tables corresponding to the data layers when the data sharing task is started, where the target data tables corresponding to the data layers include a data source table, a dimension table, and a data result table.
The preset data layers include a data source message layer (ODS), an intermediate detail layer (DWD), an intermediate summary layer (DWS), a data application layer (ADS), and a data dimension layer (DIM). The content of the target data table corresponding to each data layer is used to indicate table field definition data, and the target data table corresponding to each data layer may include a data source table, a dimension table, and a data result table, and may also include other types of data tables, which is not limited here. The server allocates storage resources through the target data tables, and the target data table of each data layer can map to a different storage type; for example, a dimension table can be a remote dictionary service (redis) table or a relational database (mysql) table, and a data result table can be a column-oriented database (clickhouse) table, a mysql table, or a redis table. The target data task may process multiple data tables (i.e., preset data sources) and generate one or more new data tables (i.e., target data tables). Each application can build data tables under its respective workspace. Since different applications may process the same data table, the server supports applications publishing the target data tables corresponding to the data layers to a preset public database. All applications can read the data tables in the public database and apply for authorization to use them. For example, each application can publish its transaction-related target data tables to a public transaction repository named dw_order. This improves the standardization of the data processing flow and the readability and reusability of data assets.
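A possible, purely illustrative mapping of each data layer's target data table onto a physical storage type is sketched below; the concrete table names and the publishing helper are assumptions, while the layer-to-storage pairings follow the examples in this step.

```python
# Sketch: bind each data layer's target data table to a storage backend and
# publish the tables to a public database with a security level for authorization.
LAYER_STORAGE = {
    "ODS": {"table": "ods_source",  "storage": "kafka"},       # data source / message layer
    "DIM": {"table": "dim_lookup",  "storage": "redis"},       # dimension table, key-value store
    "DWD": {"table": "dwd_detail",  "storage": "mysql"},       # intermediate detail layer
    "DWS": {"table": "dws_summary", "storage": "mysql"},       # intermediate summary layer
    "ADS": {"table": "ads_result",  "storage": "clickhouse"},  # data application / result layer
}

def publish_to_public_database(layer_tables, public_db, security_level):
    """Publish each layer's target data table so other applications can apply for access."""
    for layer, meta in layer_tables.items():
        public_db[meta["table"]] = {"layer": layer,
                                    "storage": meta["storage"],
                                    "security_level": security_level}
    return public_db

public_db = publish_to_public_database(LAYER_STORAGE, {}, security_level="internal")
```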
Further, the server stores the target data table in the blockchain database, which is not limited herein.
In the embodiment of the invention, a resource queue mapping relationship is established between the initial running resource and a preset distributed system to obtain a target running resource; the target running resource and a target business process are allocated to the target application, a target data task is set for the target business process according to a preset permission policy, and the target data task is shared; real-time data mining processing is performed on each preset data layer through the target data task, and a target data table corresponding to each data layer is generated and shared in the workspace corresponding to the target application. This improves the standardization of the data processing flow and the readability and reusability of data assets. The scheme can be applied in the smart medical field to promote the construction of smart cities.
Referring to fig. 2, another embodiment of a data resource allocation processing method according to the embodiment of the present invention includes:
201. receiving a resource configuration request, generating an initial running resource according to the resource configuration request, and establishing a resource queue mapping relation between the initial running resource and a preset distributed system to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
The target running resource is used to indicate an initial running resource that has a one-to-one correspondence with a target queue in the preset distributed system. The target running resource can be regarded as a tenant's share of the physical server cluster, and running resources (e.g., CPUs and memory) are isolated between tenants.
Optionally, the server receives the resource configuration request and performs parameter parsing on it to obtain the target tenant identifier, the resource information to be allocated, and the queue identifier. Then, the server queries preset resource configuration information according to the target tenant identifier to obtain query data. If the query data is not null, the server configures the initial running resource according to the resource information to be allocated and the query data; that is, the server compares the resource information to be allocated with the query data by preset resource type (for example, memory resources) to obtain a comparison result and configures the initial running resource according to that result. Further, if the query data is not null, the server judges whether the resource information to be allocated is consistent with the query data; if it is consistent, the server determines the query data as the initial running resource; if it is inconsistent, the server performs capacity expansion or reduction according to the resource information to be allocated to obtain the initial running resource and updates it into the preset resource allocation data table according to the target tenant identifier. If the query data is null, the server divides the preset computing resources according to the resource information to be allocated to obtain the initial running resource, and maps and stores the target tenant identifier and the initial running resource into the preset resource allocation data table; the preset computing resources include CPU resources, memory resources, hard disk resources, network resources, and the like. Further, the server can also judge whether the resource information to be allocated meets a preset resource limit condition. When it does, the server divides the preset computing resources according to the resource information to be allocated to obtain the initial running resource; when it does not, the server generates and sends prompt information indicating that the resource information to be allocated exceeds the preset resource limit. The preset resource limit condition is used to indicate the maximum upper limit on computing resources the server may allocate. Finally, the server establishes a resource queue mapping relationship between the initial running resource and the preset distributed system according to the target tenant identifier and the queue identifier to obtain the target running resource, which has a unique target tenant identifier. That is, the server maps and associates the target tenant identifier with the queue identifier, and accordingly a resource queue mapping relationship is established between the initial running resource and the preset distributed system. The target tenant identifier and the queue identifier may be numeric values or character strings, which is not limited here.
Further, the server reads a preset queue configuration policy according to the queue identifier to obtain read data; when the read data is null, the server queries a preset queue creation rule according to the queue identifier to obtain a queue generation instruction and calls that instruction to create a target queue in the preset distributed system; and the server stores the initial running resource and the target queue in association into the preset queue configuration policy according to the target tenant identifier and the queue identifier to obtain the target running resource, which has a unique target tenant identifier.
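The configuration logic of this step can be illustrated with the following simplified sketch; the request format, field names, and limit check are assumptions rather than the claimed implementation.

```python
# Sketch: parse a resource configuration request, look up any existing allocation
# for the tenant, and either reuse, expand/shrink, or newly divide resources.
def configure_initial_resource(request, allocation_table, total_capacity):
    """Return the tenant id, queue id, and initial running resource for the request."""
    tenant_id = request["tenant_id"]
    requested = request["resources"]            # e.g. {"vcores": 32, "memory_gb": 128}
    queue_id = request["queue_id"]

    existing = allocation_table.get(tenant_id)  # query preset resource configuration information
    if existing is not None:
        if existing == requested:
            initial = existing                  # already consistent, reuse as-is
        else:
            initial = dict(requested)           # expand or shrink to the requested size
            allocation_table[tenant_id] = initial
    else:
        # New tenant: check the resource limit before dividing the computing resources.
        if any(requested[k] > total_capacity.get(k, 0) for k in requested):
            raise ValueError("requested resources exceed the preset resource limit")
        initial = dict(requested)
        allocation_table[tenant_id] = initial   # map tenant id -> initial running resource

    return tenant_id, queue_id, initial

allocation_table = {}
tenant, queue, resource = configure_initial_resource(
    {"tenant_id": "tenant_001",
     "resources": {"vcores": 32, "memory_gb": 128},
     "queue_id": "default"},
    allocation_table,
    total_capacity={"vcores": 512, "memory_gb": 2048})
```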
202. Receiving an application creation request, generating a target application according to the application creation request, and allocating target running resources to the target application according to an application identifier and the target tenant identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating storage definitions of each data table, each data task, each data component and each data function according to a document object mode.
It should be noted that, in order to better interface with external big data components, the server abstracts the components required in the data development process (e.g., data tables, data tasks, data components, and data functions) into document objects, which facilitates component expansion and increases flexibility. Document objects can be stored in different classification folders for management. A document object contains content and configuration information; different contents and configurations can be combined into different big data development components, which improves stability and extensibility. Meanwhile, the terminal can automatically load different types of document editors by identifying the different document contents. For example, the terminal loads a table editor for data tables and a structured query language editor for data tasks.
Optionally, the server receives the application creation request and parses it to obtain the application identifier and the target tenant identifier; further, the server may also verify whether the target tenant identifier exists, so as to ensure that the target running resource has been created and has not yet been allocated. The server judges whether the target application has been created according to the application identifier, where the target application has a unique application identifier and a unique workspace, and the workspace is used to indicate that data tables, data tasks, data components, and data functions are each stored and defined as document objects. Specifically, the server retrieves a preset application information table according to the application identifier to obtain a retrieval result; when the retrieval result is the preset application name, the server determines that the target application has been created; when it is not, the server determines that the target application has not been created. If the target application has been created, the server establishes a binding relationship between the target application and the target running resource according to the application identifier and the target tenant identifier; it should be noted that the target application is allocated a corresponding running resource (i.e., the target running resource) and can submit data tasks to that running resource. If the target application has not been created, the server creates it according to the application identifier, allocates a workspace to it, and maps and binds the target application and the target running resource according to the application identifier and the target tenant identifier. For example, the terminal generates application 1 according to the application identifier app_001 and application 2 according to the application identifier app_002; the server allocates the workspaces /var/www/html/app_001 and /var/www/html/app_002 to application 1 and application 2 respectively, and allocates the target running resource universal resource pool to both. Application 1 and application 2 can therefore each apply to the universal resource pool for computing resources such as CPU and memory to run their respective data tasks.
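The document-object abstraction of step 202 might be modeled as in the following sketch; the field names, editor identifiers, and example documents are assumptions introduced for illustration.

```python
# Sketch: every development component (data table, data task, data component,
# data function) is stored as a document with content plus configuration, and
# the editor type can be selected from the document type.
import json

def make_document(doc_type, name, content, config=None):
    """Build a document object; content and config stay JSON-serializable."""
    return {"type": doc_type,          # "table" | "task" | "component" | "function"
            "name": name,
            "content": content,        # e.g. table field definitions or SQL text
            "config": config or {}}

def editor_for(document):
    """Pick an editor based on the document type, as the terminal does."""
    return {"table": "table_editor", "task": "sql_editor"}.get(document["type"], "text_editor")

table_doc = make_document("table", "ods_user_act",
                          content={"fields": ["userId", "itemId"]})
task_doc = make_document("task", "transaction_behavior_detection",
                         content="SELECT itemId, COUNT(DISTINCT userId) FROM ods_user_act GROUP BY itemId")
print(editor_for(table_doc), editor_for(task_doc))   # table_editor sql_editor
print(json.dumps(table_doc, indent=2))
```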
203. Receiving a business process generation request, setting a target business process for a target application according to the business process generation request, setting a target data task for the target business process according to a preset authority strategy, and performing task sharing on the target data task when starting a data sharing task.
The preset resource sharing security mechanism is used to indicate that a data task is published to the preset public space and that an authorization operation (for example, setting read, write, and execute permissions) is performed on it, providing a security guarantee so as to ensure the security, stability, and reliability of information sharing. It can be understood that, during data task processing, some data tasks have the same processing logic and differ only in a few parameters. The server publishes the target data task to the preset public space as a component (i.e., the target component). Data processing components published to the public space can either be made available to all applications directly, or be readable by all applications but usable by some applications only after a permission application is approved; the details are not limited here.
Optionally, the server receives the business process generation request, performs parameter parsing on it to obtain the business identifier, the application identifier, and the data task identifier, and determines the target application according to the application identifier. When no target business process exists in the target application, the server queries a preset business process rule according to the business identifier to obtain a business process name, creates the target business process in the workspace based on that name, and establishes a mapping relationship between the target business process and the target application according to the business identifier and the application identifier. The server queries a preset data task creation rule according to the data task identifier to obtain a data task name, and retrieves the preset public space based on the preset permission policy and the data task name to obtain a retrieval result. When the retrieval result is not null, the server determines that a data processing docking component exists in the preset public space, sets that docking component as the target data task, and maps and associates the file address corresponding to the target data task with the business identifier, where the target data task is a document object. When the retrieval result is null, the server generates the target data task based on the data task name, stores it in a preset data task document, and sets a mapping relationship between the target business process and the target data task according to the data task identifier and the business identifier, where the target data task is a document object and the preset data task rule is used to indicate the configuration relationship between the data task identifier and the data task name. The server obtains the sharing state code; if it equals the preset sharing value, the server starts the data sharing task, and when the data sharing task is started, the server packages the target data task into a target component through the preset resource sharing security mechanism and publishes the target component to the preset public space.
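A minimal sketch of packaging a data task into a shared component and publishing it to the public space is given below; the component fields, the permission mask, and the example SQL template are assumptions rather than the claimed mechanism.

```python
# Sketch: wrap a data task with shared logic but tunable parameters into a
# component, set an authorization mask, and publish it to the public space.
def package_component(task, parameters, permissions="r-x"):
    """Wrap a data task plus its tunable parameters into a shareable component."""
    return {"name": task["name"],
            "sql_template": task["sql"],     # shared processing logic
            "parameters": parameters,        # only these differ between uses
            "permissions": permissions}      # authorization set on publication

def publish_to_public_space(component, public_space):
    public_space[component["name"]] = component
    return component

task = {"name": "transaction_behavior_detection",
        "sql": "SELECT userId, COUNT(*) FROM ods_user_act WHERE amount > {threshold} GROUP BY userId"}
component = package_component(task, parameters={"threshold": 10000})
publish_to_public_space(component, public_space={})
```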
204. And calling a preset data tool based on the target running resource, analyzing the target data task through the preset data tool, and performing real-time data mining processing on each preset data layer to obtain mining data corresponding to each data layer, wherein the preset data tool comprises a preset data component and a preset data function.
The document contents corresponding to the preset data component, the preset data function and the target data task may be in the JavaScript Object Notation (JSON) data format, and different types of objects are defined by defining different attributes. Among the preset data layers, the ODS layer stores data source tables, which generally serve as the data sources docked by the real-time platform, such as business library messages and behavior log messages; processing this layer is typically the first step of data processing. The DIM layer stores data dimension tables, which are generally used for expanding the data columns with static data that needs to be associated during real-time data processing, for use by the next layer of processing. For example, a commodity transaction information table records the classification of a commodity, but only its minimum (finest-grained) classification; when the business requires statistical analysis of the major classification of commodities, the transaction information table needs to use the minimum classification ID to associate all classification information of that commodity, and the classification table serves as a dimension table supplementing the real-time information. The DWD and DWS layers are the data detail layer and the data statistics layer respectively, and generally store the intermediate tables generated during real-time data processing. The ADS layer is the data application layer and generally stores the result data.
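For example, a data task stored as a JSON document object might resemble the structure below; all field names are assumptions introduced only to illustrate how different attributes distinguish object types.

```python
import json

# Hypothetical example of a data task stored as a JSON document object, with a
# "type" attribute distinguishing tables, tasks, components and functions.
# Every field name here is an assumption made for illustration.

data_task_document = {
    "type": "dataTask",
    "name": "item_main_category_uv",
    "layers": {
        "ODS": ["ods_user_act"],          # data source table
        "DIM": ["dim_item_category"],     # dimension table
        "ADS": ["ads_item_main_uv"],      # data result table
    },
    "params": {"window": "1d"},
}

print(json.dumps(data_task_document, indent=2, ensure_ascii=False))
```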
Real-time data mining processing is performed on each preset data layer through the target data task, and the data involved in each data layer is authorized by the user and does not relate to user privacy.
205. And under the working space corresponding to the target application, generating a target data table corresponding to each data layer based on the mining data corresponding to each data layer, wherein the target data table corresponding to each data layer comprises a data source table, a dimension table and a data result table.
It should be noted that the physical storage supported by each data layer is differentiated. For example, the ODS layer typically supports message queues such as Kafka; the DIM layer generally supports key-value storage or Java Database Connectivity (JDBC), such as redis and mysql; DWD, DWS and ADS are only logically distinct, and different data layers support different storage. The target data task generates the association relation between the target data tables corresponding to the data layers through SQL statements. For example, to count the unique visitors (UV) of each major commodity classification, the server creates a user behavior table ods_user_act (i.e., a target data table) on the ODS layer, containing a user identifier field userId and a commodity identifier field itemId; creates a commodity classification table dim_item_category (i.e., a target data table) on the DIM layer, containing a commodity identifier field itemId, a major classification field mainCategory and a sub-classification field subCategory; and creates a statistical result table ads_item_main_uv (i.e., a target data table) on the ADS layer based on ods_user_act and dim_item_category.
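A hedged sketch of the cross-layer SQL wiring described above follows; the execute_sql entry point and the exact SQL dialect are assumptions, since the embodiment does not prescribe a specific engine.

```python
# Hedged sketch of the SQL association between the ODS, DIM and ADS tables,
# assuming a hypothetical execute_sql() entry point of the preset data tool
# and a generic SQL dialect.

DDL_AND_QUERY = [
    # ODS layer: user behavior source table
    """CREATE TABLE ods_user_act (
           userId  STRING,
           itemId  STRING
       )""",
    # DIM layer: commodity classification dimension table
    """CREATE TABLE dim_item_category (
           itemId       STRING,
           mainCategory STRING,
           subCategory  STRING
       )""",
    # ADS layer: UV per major classification, joining source and dimension tables
    """CREATE TABLE ads_item_main_uv AS
       SELECT d.mainCategory,
              COUNT(DISTINCT o.userId) AS uv
       FROM   ods_user_act o
       JOIN   dim_item_category d ON o.itemId = d.itemId
       GROUP  BY d.mainCategory""",
]

def run(execute_sql):
    """execute_sql is assumed to be provided by the preset data tool."""
    for statement in DDL_AND_QUERY:
        execute_sql(statement)

# e.g. run(print) simply echoes the statements instead of submitting them.
```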
206. And acquiring a sharing state code, if the sharing state code is a preset sharing value, starting a data sharing task, when the data sharing task is started, issuing the target data tables corresponding to the data layers to a preset public database, and authorizing the target data tables corresponding to the data layers according to a preset security level.
It should be noted that the server manages and controls the storage resources through the data tables, and performs different levels of permission control on the target data table according to different security policies. The server distinguishes the storage type of the target data table by dbType, for example, dbType = redis or dbType = hive, queries the information corresponding to the target data table according to the different dbType values, and completes the corresponding authorization.
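The dbType-based authorization can be illustrated with the following sketch; the authorizer functions and security-level labels are assumptions, as the embodiment only states that authorization differs per storage type and security level.

```python
# Hypothetical sketch of authorization dispatched by storage type (dbType).
# The authorizer functions and security-level names are assumptions.

def authorize_redis(table, user, level):
    print(f"grant {level} on redis key pattern {table}:* to {user}")

def authorize_hive(table, user, level):
    print(f"GRANT SELECT ON TABLE {table} TO USER {user}  -- level={level}")

AUTHORIZERS = {
    "redis": authorize_redis,
    "hive": authorize_hive,
}

def authorize_table(table_meta, user, security_level):
    """Look up the target data table's dbType and apply the matching authorization."""
    db_type = table_meta["dbType"]
    authorizer = AUTHORIZERS.get(db_type)
    if authorizer is None:
        raise ValueError(f"unsupported dbType: {db_type}")
    authorizer(table_meta["name"], user, security_level)

authorize_table({"name": "ads_item_main_uv", "dbType": "hive"}, "analyst", "L2")
```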
Further, the server may also monitor the target data task in real time. At the task submission stage, the server checks the target data task through identification of its permission and security level, and prompts the user to perform the related authorization operations on any target data task that fails the check.
In the embodiment of the invention, a resource queue mapping relation is established between the initial running resource and a preset distributed system to obtain a target running resource; allocating a target running resource and a target business process to the target application, setting a target data task for the target business process according to a preset authority strategy, and sharing the target data task; and performing real-time data mining processing on each preset data layer through a target data task, and generating and sharing a target data table corresponding to each data layer in a working space corresponding to a target application. The normalization of the data processing flow and the readability and reusability of the data assets are improved. This scheme can be applied in the smart healthcare field to promote the construction of smart cities.
The data resource allocation processing method in the embodiment of the present invention is described above. With reference to fig. 3, the data resource allocation processing apparatus in the embodiment of the present invention is described below; one embodiment of the data resource allocation processing apparatus in the embodiment of the present invention includes:
a generating module 301, configured to receive a resource configuration request, generate an initial operating resource according to the resource configuration request, and establish a resource queue mapping relationship between the initial operating resource and a preset distributed system to obtain a target operating resource, where the target operating resource has a unique target tenant identifier;
the allocation module 302 is configured to receive an application creation request, generate a target application according to the application creation request, and allocate a target running resource to the target application according to an application identifier and the target tenant identifier, where the target application has a unique application identifier and a unique working space, and the working space is used to instruct storage definitions for each data table, each data task, each data component, and each data function according to a document object mode;
the setting module 303 is configured to receive a service flow generation request, set a target service flow for a target application according to the service flow generation request, set a target data task for the target service flow according to a preset permission policy, and perform task sharing on the target data task when starting a data sharing task;
the processing module 304 is configured to run a target data task based on a target running resource, perform real-time data mining processing on preset data layers through the target data task, generate a target data table corresponding to each data layer in a working space corresponding to a target application, and perform data sharing on the target data table corresponding to each data layer when starting a data sharing task, where the target data table corresponding to each data layer includes a data source table, a dimension table, and a data result table.
Further, the target data table may also be stored in a blockchain database, which is not limited herein.
In the embodiment of the invention, a resource queue mapping relation is established between the initial running resource and a preset distributed system to obtain a target running resource; allocating a target running resource and a target business process to the target application, setting a target data task for the target business process according to a preset authority strategy, and sharing the target data task; and performing real-time data mining processing on each preset data layer through a target data task, and generating and sharing a target data table corresponding to each data layer in a working space corresponding to a target application. The normalization of the data processing flow and the readability and reusability of the data assets are improved.
Referring to fig. 4, another embodiment of a data resource allocation processing apparatus according to the embodiment of the present invention includes:
a generating module 301, configured to receive a resource configuration request, generate an initial operating resource according to the resource configuration request, and establish a resource queue mapping relationship between the initial operating resource and a preset distributed system to obtain a target operating resource, where the target operating resource has a unique target tenant identifier;
the allocation module 302 is configured to receive an application creation request, generate a target application according to the application creation request, and allocate a target running resource to the target application according to an application identifier and the target tenant identifier, where the target application has a unique application identifier and a unique working space, and the working space is used to instruct storage definitions for each data table, each data task, each data component, and each data function according to a document object mode;
the setting module 303 is configured to receive a service flow generation request, set a target service flow for a target application according to the service flow generation request, set a target data task for the target service flow according to a preset permission policy, and perform task sharing on the target data task when starting a data sharing task;
the processing module 304 is configured to run a target data task based on a target running resource, perform real-time data mining processing on preset data layers through the target data task, generate a target data table corresponding to each data layer in a working space corresponding to a target application, and perform data sharing on the target data table corresponding to each data layer when starting a data sharing task, where the target data table corresponding to each data layer includes a data source table, a dimension table, and a data result table.
Optionally, the generating module 301 may further include:
the analyzing unit 3011 is configured to receive the resource configuration request, perform parameter analysis on the resource configuration request, and obtain a target tenant identifier, resource information to be allocated, and a queue identifier; the query unit 3012 is configured to query preset resource configuration information according to the target tenant identifier, so as to obtain query data; a configuration unit 3013, configured to configure an initial running resource according to the resource information to be allocated and the query data if the query data is not a null value; a dividing unit 3014, configured to divide a preset computing resource according to the resource information to be allocated if the query data is a null value, to obtain an initial operating resource, and map and store the target tenant identifier and the initial operating resource into a preset resource allocation data table; the establishing unit 3015 is configured to establish a resource queue mapping relationship between the initial operating resource and the preset distributed system according to the target tenant identifier and the queue identifier, so as to obtain a target operating resource, where the target operating resource has a unique target tenant identifier.
Optionally, the configuration unit 3013 may be further specifically configured to:
if the query data is not null, judging whether the resource information to be distributed is consistent with the query data; if the resource information to be distributed is consistent with the query data, determining the query data as an initial operating resource; and if the resource information to be distributed is inconsistent with the query data, performing capacity expansion or capacity reduction processing according to the resource information to be distributed to obtain initial running resources, and updating the initial running resources into a preset resource distribution data table according to the target tenant identification.
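A minimal sketch of this reconciliation, assuming a simple cores/memory representation of resources and an in-memory allocation table, is given below for illustration.

```python
# Hedged sketch of the reconciliation above: compare the requested resources
# with the tenant's recorded allocation and expand or shrink as needed. The
# resource representation and the in-memory allocation table are assumptions.

def reconcile_resources(requested, recorded, tenant_id, allocation_table):
    """Return the initial running resource for a tenant whose query data is not null."""
    if requested == recorded:
        return recorded                       # reuse the existing allocation as-is
    # Expand or shrink each dimension toward the requested amount.
    initial = {key: requested.get(key, recorded.get(key))
               for key in set(requested) | set(recorded)}
    allocation_table[tenant_id] = initial     # update the preset resource allocation data table
    return initial

table = {"tenant-42": {"cores": 8, "memoryGb": 32}}
print(reconcile_resources({"cores": 16, "memoryGb": 32},
                          table["tenant-42"], "tenant-42", table))
```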
Optionally, the establishing unit 3015 may be further specifically configured to:
reading a preset queue configuration strategy according to the queue identification to obtain read data; when the read data is null, inquiring a preset queue creation rule according to the queue identification to obtain a queue generation instruction, and calling the queue generation instruction to create a target queue in a preset distributed system; and storing the initial running resource and the target queue into a preset queue configuration strategy in an associated manner according to the target tenant identification and the queue identification to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
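The resource-queue binding can be sketched as follows, assuming a hypothetical create_queue callback and an in-memory queue configuration strategy; neither is prescribed by the embodiment.

```python
# Hypothetical sketch of binding an initial running resource to a queue of the
# preset distributed system. create_queue and the policy layout are assumptions
# used only to illustrate the mapping step.

def bind_resource_to_queue(tenant_id, queue_id, initial_resource, queue_policy, create_queue):
    """Return the target running resource keyed by its unique tenant identifier."""
    if queue_id not in queue_policy["queues"]:
        # The read data is null: create the target queue in the distributed system.
        queue_policy["queues"][queue_id] = create_queue(queue_id)
    target_resource = {
        "tenantId": tenant_id,               # unique target tenant identifier
        "queueId": queue_id,
        "resource": initial_resource,
    }
    queue_policy["bindings"][tenant_id] = target_resource
    return target_resource

policy = {"queues": {}, "bindings": {}}
target = bind_resource_to_queue("tenant-42", "queue-a",
                                {"cores": 16, "memoryGb": 32},
                                policy, create_queue=lambda qid: {"id": qid})
print(target)
```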
Optionally, the allocating module 302 may be further specifically configured to:
receiving an application creation request, and analyzing the application creation request to obtain an application identifier and a target tenant identifier; judging whether a target application is established or not according to the application identification, wherein the target application has a unique application identification and a unique working space, and the working space is used for indicating that the data table, the data task, the data component and the data function are respectively stored and defined in a document object mode; if the target application is established, establishing a binding relationship between the target application and the target running resource according to the application identifier and the target tenant identifier; and if the target application is not created, creating the target application according to the application identifier, allocating a working space to the target application, and mapping and binding the target application and the target running resource according to the application identifier and the target tenant identifier.
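For illustration, a minimal sketch of the create-or-bind branch is given below; the workspace layout and registry dictionaries are assumptions.

```python
# Hedged sketch of the application-creation branch above. The workspace layout
# and the registry dictionaries are assumptions; only the branching mirrors the text.

def create_or_bind_application(app_id, tenant_id, applications, target_resources):
    """Create the target application if needed, then bind it to the target running resource."""
    app = applications.get(app_id)
    if app is None:
        # Not created yet: create the application and allocate its unique working space,
        # which stores tables, tasks, components and functions as document objects.
        app = {
            "appId": app_id,
            "workspace": {"tables": {}, "tasks": {}, "components": {}, "functions": {}},
        }
        applications[app_id] = app
    # Map and bind the application to the tenant's target running resource.
    app["boundResource"] = target_resources[tenant_id]
    return app

apps = {}
resources = {"tenant-42": {"queueId": "queue-a", "cores": 16}}
print(create_or_bind_application("app-1", "tenant-42", apps, resources))
```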
Optionally, the setting module 303 may be further specifically configured to:
receiving a business process generation request, performing parameter analysis on the business process generation request to obtain a business identifier, an application identifier and a data task identifier, and determining the target application according to the application identifier; when a target business process does not exist in the target application, querying a preset business process rule according to the business identifier to obtain a business process name, creating a target business process in the working space based on the business process name, and establishing a mapping relation between the target business process and the target application according to the business identifier and the application identifier; querying a preset data task creating rule according to the data task identifier to obtain a data task name, and searching a preset public space based on a preset authority strategy and the data task name to obtain a retrieval result; when the retrieval result is not a null value, determining that a data processing docking component exists in the preset public space, setting the data processing docking component as a target data task, and mapping and associating the file address corresponding to the target data task with the business identifier, wherein the target data task is a document object; when the retrieval result is a null value, generating a target data task based on the data task name, storing the target data task in a preset data task document, and setting a mapping relation between the target business process and the target data task according to the data task identifier and the business identifier, wherein the target data task is a document object; and obtaining a sharing state code, starting a data sharing task if the sharing state code is a preset sharing value, packaging the target data task into a target component through a preset resource sharing security mechanism when the data sharing task is started, and publishing the target component to the preset public space.
Optionally, the processing module 304 may be further specifically configured to:
calling a preset data tool based on the target running resource, analyzing a target data task through the preset data tool, and performing real-time data mining processing on each preset data layer to obtain mining data corresponding to each data layer, wherein the preset data tool comprises a preset data component and a preset data function; under a working space corresponding to a target application, generating a target data table corresponding to each data layer based on mining data corresponding to each data layer, wherein the target data table corresponding to each data layer comprises a data source table, a dimension table and a data result table; and acquiring a sharing state code, if the sharing state code is a preset sharing value, starting a data sharing task, when the data sharing task is started, issuing the target data tables corresponding to the data layers to a preset public database, and authorizing the target data tables corresponding to the data layers according to a preset security level.
In the embodiment of the invention, a resource queue mapping relation is established between the initial running resource and a preset distributed system to obtain a target running resource; allocating a target running resource and a target business process to the target application, setting a target data task for the target business process according to a preset authority strategy, and sharing the target data task; and performing real-time data mining processing on each preset data layer through a target data task, and generating and sharing a target data table corresponding to each data layer in a working space corresponding to a target application. The normalization of the data processing flow and the readability and reusability of the data assets are improved.
Fig. 3 and fig. 4 above describe the data resource allocation processing apparatus in the embodiment of the present invention in detail from the perspective of modularization, and the data resource allocation processing device in the embodiment of the present invention is described in detail below from the perspective of hardware processing.
Fig. 5 is a schematic structural diagram of a data resource allocation processing device 500 according to an embodiment of the present invention. The data resource allocation processing device 500 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 510 (e.g., one or more processors), a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532. The memory 520 and the storage medium 530 may be transient or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown), and each module may include a series of instruction operations on the data resource allocation processing device 500. Further, the processor 510 may be configured to communicate with the storage medium 530 and execute the series of instruction operations in the storage medium 530 on the data resource allocation processing device 500.
The data resource allocation processing device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input-output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc. Those skilled in the art will appreciate that the data resource allocation processing device architecture shown in fig. 5 does not constitute a limitation of the data resource allocation processing device and may include more or fewer components than those shown, or some of the components may be combined, or a different arrangement of components.
The present invention also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and which may also be a volatile computer-readable storage medium, having stored therein instructions, which, when run on a computer, cause the computer to perform the steps of the data resource allocation processing method.
The present invention also provides a data resource allocation processing device, which includes a memory and a processor, where the memory stores instructions, and the instructions, when executed by the processor, cause the processor to execute the steps of the data resource allocation processing method in the foregoing embodiments.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms and encryption algorithms. A blockchain is essentially a decentralized database: it is a series of data blocks associated by cryptographic methods, and each data block contains the information of a batch of network transactions, which is used to verify the validity (anti-counterfeiting) of the information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
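As a generic illustration of the hash chaining mentioned above (not the specific blockchain platform of the embodiment), each block can store the hash of the previous block so that tampering with any block invalidates all later hashes:

```python
import hashlib
import json

# Generic illustration of hash chaining: each block stores the hash of the
# previous block, so tampering with any block invalidates all later hashes.

def make_block(transactions, prev_hash):
    body = {"transactions": transactions, "prevHash": prev_hash}
    block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": block_hash}

def verify_chain(chain):
    for prev, block in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(json.dumps(
            {"transactions": block["transactions"], "prevHash": block["prevHash"]},
            sort_keys=True).encode()).hexdigest()
        if block["prevHash"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

genesis = make_block(["init"], prev_hash="0" * 64)
chain = [genesis, make_block(["store ads_item_main_uv"], genesis["hash"])]
print(verify_chain(chain))  # True
```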
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A data resource allocation processing method is characterized in that the data resource allocation processing method comprises the following steps:
receiving a resource configuration request, generating an initial running resource according to the resource configuration request, and establishing a resource queue mapping relation between the initial running resource and a preset distributed system to obtain a target running resource, wherein the target running resource has a unique target tenant identification;
receiving an application creation request, generating a target application according to the application creation request, and allocating the target running resource to the target application according to an application identifier and a target tenant identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating that storage definitions are respectively applied to each data table, each data task, each data component and each data function according to a document object mode;
receiving a business process generation request, setting a target business process for the target application according to the business process generation request, setting a target data task for the target business process according to a preset authority strategy, and performing task sharing on the target data task when starting a data sharing task;
and running the target data task based on the target running resource, performing real-time data mining processing on preset data layers through the target data task, generating target data tables corresponding to the data layers in a working space corresponding to the target application, and performing data sharing on the target data tables corresponding to the data layers when starting a data sharing task, wherein the target data tables corresponding to the data layers comprise a data source table, a dimension table and a data result table.
2. The data resource allocation processing method according to claim 1, wherein the receiving a resource allocation request, generating an initial operating resource according to the resource allocation request, and establishing a resource queue mapping relationship between the initial operating resource and a preset distributed system to obtain a target operating resource, wherein the target operating resource has a unique target tenant identifier, includes:
receiving a resource configuration request, and performing parameter analysis on the resource configuration request to obtain a target tenant identifier, resource information to be allocated and a queue identifier;
querying preset resource configuration information according to the target tenant identification to obtain query data;
if the query data is not null, configuring initial operating resources according to the information of the resources to be distributed and the query data;
if the query data is null, dividing preset computing resources according to the information of the resources to be allocated to obtain initial operating resources, and mapping and storing the target tenant identification and the initial operating resources into a preset resource allocation data table;
and establishing a resource queue mapping relation between the initial running resource and a preset distributed system according to the target tenant identification and the queue identification to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
3. The data resource allocation processing method according to claim 2, wherein if the query data is not null, configuring an initial operating resource according to the information of the resource to be allocated and the query data, including:
if the query data is not null, judging whether the resource information to be distributed is consistent with the query data;
if the resource information to be distributed is consistent with the query data, determining the query data as an initial operating resource;
and if the resource information to be distributed is inconsistent with the query data, performing capacity expansion or capacity reduction processing according to the resource information to be distributed to obtain initial running resources, and updating the initial running resources into the preset resource distribution data table according to the target tenant identification.
4. The data resource allocation processing method according to claim 2, wherein the establishing a resource queue mapping relationship between the initial operating resource and a preset distributed system according to the target tenant identifier and the queue identifier to obtain a target operating resource, where the target operating resource has a unique target tenant identifier, includes:
reading a preset queue configuration strategy according to the queue identification to obtain read data;
when the read data is null, inquiring a preset queue creation rule according to the queue identification to obtain a queue generation instruction, and calling the queue generation instruction to create a target queue in a preset distributed system;
and storing the initial running resource and the target queue into the preset queue configuration strategy in an associated manner according to the target tenant identification and the queue identification to obtain a target running resource, wherein the target running resource has a unique target tenant identification.
5. The data resource allocation processing method according to claim 1, wherein the receiving an application creation request, generating a target application according to the application creation request, and allocating the target running resource to the target application according to an application identifier and the target tenant identifier, the target application having a unique application identifier and a unique workspace, the workspace being used to indicate that definitions are stored for data tables, data tasks, data components, and data functions respectively according to a document object manner, includes:
receiving an application creation request, and analyzing the application creation request to obtain an application identifier and the target tenant identifier;
judging whether a target application is created or not according to the application identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating that a data table, a data task, a data component and a data function are respectively stored and defined in a document object mode;
if the target application is established, establishing a binding relationship between the target application and the target running resource according to the application identifier and the target tenant identifier;
if the target application is not created, the target application is created according to the application identifier, the working space is distributed to the target application, and the target application and the target running resource are mapped and bound according to the application identifier and the target tenant identifier.
6. The data resource allocation processing method according to claim 1, wherein the receiving a service flow generation request, setting a target service flow for the target application according to the service flow generation request, setting a target data task for the target service flow according to a preset permission policy, and performing task sharing on the target data task when starting a data sharing task, includes:
receiving a service flow generation request, performing parameter analysis on the service flow generation request to obtain a service identifier, the application identifier and a data task identifier, and determining the target application according to the application identifier;
when a target business process does not exist in the target application, inquiring a preset business process rule according to the business identifier to obtain a business process name, creating a target business process in the working space based on the business process name, and establishing a mapping relation between the target business process and the target application according to the business identifier and the application identifier;
inquiring a preset data task creating rule according to the data task identifier to obtain a data task name, and retrieving a preset public space based on a preset authority strategy and the data task name to obtain a retrieval result;
when the retrieval result is not a null value, determining that a data processing docking assembly exists in the preset public space, setting the data processing docking assembly as a target data task, and mapping and associating a file address corresponding to the target data task with the service identifier, wherein the target data task is a document object;
when the retrieval result is a null value, generating a target data task based on the data task name, storing the target data task in a preset data task document, and setting a mapping relation between the target business process and the target data task according to the data task identifier and the business identifier, wherein the target data task is a document object;
acquiring a sharing state code, if the sharing state code is a preset sharing value, starting a data sharing task, and when the data sharing task is started, packaging the target data task into a target component through a preset resource sharing security mechanism, and issuing the target component to the preset public space.
7. The data resource allocation processing method according to any one of claims 1 to 6, wherein the running of the target data task based on the target running resource, the real-time data mining processing on each preset data layer by the target data task, and the generation of the target data table corresponding to each data layer in the working space corresponding to the target application, when starting the data sharing task, performs data sharing on the target data table corresponding to each data layer, where the target data table corresponding to each data layer includes a data source table, a dimension table, and a data result table, includes:
calling a preset data tool based on the target running resource, analyzing the target data task through the preset data tool, and performing real-time data mining processing on each preset data layer to obtain mining data corresponding to each data layer, wherein the preset data tool comprises a preset data component and a preset data function;
under the working space corresponding to the target application, generating a target data table corresponding to each data layer based on the mining data corresponding to each data layer, wherein the target data table corresponding to each data layer comprises a data source table, a dimension table and a data result table;
the method comprises the steps of obtaining a sharing state code, starting a data sharing task if the sharing state code is a preset sharing value, issuing a target data table corresponding to each data layer to a preset public database when the data sharing task is started, and authorizing the target data table corresponding to each data layer according to a preset security level.
8. A data resource allocation processing apparatus, characterized in that the data resource allocation processing apparatus comprises:
the generation module is used for receiving a resource configuration request, generating an initial running resource according to the resource configuration request, and establishing a resource queue mapping relation between the initial running resource and a preset distributed system to obtain a target running resource, wherein the target running resource has a unique target tenant identification;
the allocation module is used for receiving an application creation request, generating a target application according to the application creation request, and allocating the target running resource to the target application according to an application identifier and a target tenant identifier, wherein the target application has a unique application identifier and a unique working space, and the working space is used for indicating storage definitions of each data table, each data task, each data component and each data function according to a document object mode;
the setting module is used for receiving a business process generation request, setting a target business process for the target application according to the business process generation request, setting a target data task for the target business process according to a preset authority strategy, and performing task sharing on the target data task when a data sharing task is started;
and the processing module is used for running the target data task based on the target running resource, performing real-time data mining processing on preset data layers through the target data task, generating target data tables corresponding to the data layers under the working space corresponding to the target application, and performing data sharing on the target data tables corresponding to the data layers when starting a data sharing task, wherein the target data tables corresponding to the data layers comprise a data source table, a dimension table and a data result table.
9. A data resource allocation processing device, characterized in that the data resource allocation processing device comprises: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invoking the instructions in the memory to cause the data resource allocation processing device to perform the data resource allocation processing method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon instructions, which when executed by a processor, implement a data resource allocation processing method according to any one of claims 1 to 7.
CN202110598937.3A 2021-05-31 2021-05-31 Data resource allocation processing method, device, equipment and storage medium Active CN113239060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110598937.3A CN113239060B (en) 2021-05-31 2021-05-31 Data resource allocation processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110598937.3A CN113239060B (en) 2021-05-31 2021-05-31 Data resource allocation processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113239060A true CN113239060A (en) 2021-08-10
CN113239060B CN113239060B (en) 2023-09-29

Family

ID=77136073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110598937.3A Active CN113239060B (en) 2021-05-31 2021-05-31 Data resource allocation processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113239060B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106575246A (en) * 2014-06-30 2017-04-19 亚马逊科技公司 Machine learning service
US20210034396A1 (en) * 2019-07-31 2021-02-04 Rubrik, Inc. Asynchronous input and output for snapshots of virtual machines
CN112615849A (en) * 2020-12-15 2021-04-06 平安科技(深圳)有限公司 Micro-service access method, device, equipment and storage medium
CN112527421A (en) * 2020-12-28 2021-03-19 平安普惠企业管理有限公司 Service calling method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孙佳俊: "Multi-tenant and multi-objective network slicing technology in data centers" *

Also Published As

Publication number Publication date
CN113239060B (en) 2023-09-29

Similar Documents

Publication Publication Date Title
US11698818B2 (en) Load balancing of machine learning algorithms
US9112777B1 (en) Tag-based resource configuration control
US20190384627A1 (en) Secure consensus-based endorsement for self-monitoring blockchain
WO2021073144A1 (en) Distributed file system monitoring method and device, terminal, and storage medium
JP5346010B2 (en) Policy management infrastructure
EP1309906B1 (en) Evidence-based security policy manager
US7870607B2 (en) Security and analysis system
US8271523B2 (en) Coordination server, data allocating method, and computer program product
EP3622449A1 (en) Autonomous logic modules
US7627662B2 (en) Transaction request processing system and method
CN111586147A (en) Node synchronization method, device, equipment and storage medium of block chain
CN105022628A (en) Extendable software application platform
CN111639309B (en) Data processing method and device, node equipment and storage medium
CN114218315A (en) Interface generation method and device, computer equipment and storage medium
CN114661319A (en) Software upgrade stability recommendation
CN109286617B (en) Data processing method and related equipment
CN106708897B (en) Data warehouse quality guarantee method, device and system
CN113239060B (en) Data resource allocation processing method, device, equipment and storage medium
US11048675B2 (en) Structured data enrichment
US20220366015A1 (en) Systems and methods for asset management
US20120166405A1 (en) Changeability And Transport Release Check Framework
US20200233870A1 (en) Systems and methods for linking metric data to resources
US7743008B2 (en) Adaptive management method with workflow control
CN110826993A (en) Project management processing method, device, storage medium and processor
CN113542387B (en) System release method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant