CN114896584B - Hive data authority control agent layer method and system - Google Patents

Info

Publication number: CN114896584B (application CN202210818903.5A)
Authority: CN (China)
Prior art keywords: HQL, authority, Hive, data, field
Legal status: Active (an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN114896584A
Inventor: 卢薇
Current and original assignee: Hangzhou Bizhi Technology Co., Ltd.
Application filed by Hangzhou Bizhi Technology Co., Ltd.; publication of application CN114896584A, followed by grant and publication of CN114896584B

Classifications

    • G06F21/45: Security arrangements; authentication of security principals; structures or tools for the administration of authentication
    • G06F16/182: File systems; distributed file systems
    • G06F16/24553: Querying structured data; query execution of query operations
    • G06F16/24573: Query processing with adaptation to user needs using data annotations, e.g. user-defined metadata
    • G06F16/252: Integrating or interfacing systems between a database management system and a front-end application
    • G06F21/44: Authentication; program or device authentication


Abstract

The invention discloses a Hive data permission control proxy-layer method and system, comprising the following steps: S1: Hive data permission application; S2: HQL parsing; S3: HQL rewriting; S4: HQL permission verification; S5: HQL table and field lineage analysis. The method and system meet the requirement for fine-grained data permission control within an enterprise. HQL parsing, rewriting, and permission verification are implemented on the basis of Hive's own prototypes, so accuracy and stability are guaranteed to a certain extent and the requirements of production deployment are largely met. The Java classes DataBlackHiveAuthorizer and DataBlackHiveAuthorizerFactory provided by the invention are both concrete implementations of Hive interfaces and enable data authentication at the employee/user level. Installation and deployment do not need to intrude on the group big data cluster management platform, and can satisfy a large share of real production scenarios.

Description

Hive data authority control agent layer method and system
Technical Field
The invention relates to the technical field of networks and data processing, in particular to a Hive data permission control proxy-layer method and system.
Background
With the wide adoption of Internet open-source technology and cloud computing, basic capabilities such as performance and scalability are no longer the bottleneck of enterprise business development. Breaking down data silos, maximizing data intelligence, and letting data drive the business have become the core competitive strengths of enterprises. With the rapid development of big data open-source technology and its communities, enterprises build data middle platforms on a big data technology stack with Hadoop at its core: HDFS (note: the distributed file storage system in the Hadoop ecosystem) and HBase (note: a column-oriented, non-relational database system in the Hadoop ecosystem) for distributed storage; Hive (note: an offline query engine in the Hadoop ecosystem) and Spark (note: a memory-based distributed computing framework and offline computing engine in the Hadoop ecosystem) for offline data analysis; Flink (note: a real-time computing engine in the Hadoop ecosystem supporting unified stream and batch processing) for real-time data analysis; Yarn (note: the general-purpose resource management system in the Hadoop ecosystem) for resource management and distributed task scheduling; and Ranger/Sentry (note: two different data permission frameworks and data permission management components in the Hadoop ecosystem) for managing and controlling the big data cluster. However, one problem that must be solved after data from different departments of an enterprise is imported into the data middle platform is data permission control.
Meanwhile, to consolidate resources, an enterprise usually has a dedicated big data and security department responsible for building and managing a big data cluster management platform such as CDH (Cloudera's Distribution including Apache Hadoop), HDP (Hortonworks Data Platform), or EMR (Elastic MapReduce), while other departments apply to the group big data cluster management platform for resources such as a Hive database and HDFS file directories under the identity of a single tenant. The big data and security department allocates resource permissions to tenants through the Ranger/Sentry components bundled with CDH/HDP/EMR, thereby achieving data isolation between tenants. Such isolation between tenants is a coarse-grained data permission control scheme and cannot satisfy finer-grained data permission control inside a department: for example, employee A, a core data developer, should have operation permissions on all tables, while employee B, an ordinary visitor, should only be able to view a specific table. The root cause is that all employees of the department interact with the big data cluster through the same tenant, and the identity authentication and data authentication of Ranger and Sentry apply only to the tenant and cannot be refined to the individual employees under that tenant.
Most of the Hive data permission schemes proposed so far are based on Ranger and Sentry. Through the external interfaces provided by Ranger and Sentry, they create permission policies in Ranger Admin, or create roles in Sentry and assign permissions to those roles. In addition, some schemes perform secondary modification of the Ranger and Sentry plug-ins to achieve customized data permission control. Because both Ranger and Sentry use a plug-in mode to control data permissions for big data components such as Hive and HDFS, that is, plug-ins must be embedded into the big data components, all of these schemes are intrusive data permission control schemes. In the scenario where the big data cluster management platform is uniformly managed by the group, a department is not allowed to deploy a self-developed permission plug-in onto the group's big data cluster management platform. Therefore, in many real scenarios, a Hive data permission control proxy-layer scheme that does not intrude on the big data cluster becomes the only option for achieving fine-grained data permission control within enterprise departments.
As described above, most of the Hive data permission schemes proposed so far are plug-in schemes based on Ranger and Sentry. Through the external interfaces provided by Ranger and Sentry, they either create permission policies in Ranger Admin (note: Ranger's policy management center) or create roles in Sentry and assign permissions to those roles, and then use the native or secondarily modified Hive plug-ins of Ranger and Sentry to pull the permission policies from Ranger Admin and Sentry and complete data permission authorization. Both approaches require placing the Hive plug-ins of Ranger and Sentry under the dependency package directory of the CDH/HDP/EMR Hive, and Hive must be reconfigured and restarted for them to take effect. On the one hand, the Hive plug-in intrudes on the process by which HiveServer2 (note: the server side of Hive) executes HQL (or HiveQL, the SQL dialect provided by Hive), and the security of such a plug-in is not trusted by the group big data cluster management platform. This is why, in many realistic scenarios, especially those in which the big data cluster management platform is uniformly managed and maintained by a specific department of the group and shared by other departments, third-party Hive plug-ins are not allowed onto the big data cluster management platform. On the other hand, installing or updating a Hive plug-in requires reconfiguring and restarting Hive, which may interrupt the Hive service; the execution results of Hive tasks then cannot be returned normally, causing production accidents. For these two reasons, the existing Hive data permission schemes are only suitable for scenarios in which a department independently deploys and maintains CDH/HDP/EMR, with the department itself responsible for the security, installation, and update of the Hive plug-ins.
Therefore, implementing a data permission proxy-layer scheme for Hive, the big data query engine indispensable to a data middle platform, has very important production value. A data permission proxy layer provides data authorization service through a proxy outside the big data cluster management platform, without intruding on that platform. Aimed at the practical scenario in which most big data cluster management platforms are centrally controlled by the group, the invention proposes a Hive data permission control proxy-layer scheme that satisfies finer-grained data permission control within a department without using any plug-in to intrude on the group's big data cluster management platform.
Disclosure of Invention
Aiming at the problems in the prior art, the object of the invention is to provide a Hive data permission control proxy-layer method and system that satisfy finer-grained data permission control within departments without using any plug-in to intrude on the group big data cluster management platform.
In order to achieve the above object, the present invention provides a Hive data permission control proxy-layer method, which comprises the following steps:
S1: Hive data permission application: when the data permission approval service of a department approves an employee's Hive data permission application, the data permission management center synchronously creates the corresponding data permission policy, stores it in the table and field permission module and in the row filtering and field desensitization module, and updates the mapping between the employee and the data permission policy in user permission management;
S2: HQL parsing: before HQL parsing, Kerberos authentication of the tenant must be performed using the keytab with which the department interacts with the group big data cluster management platform, so as to achieve data isolation between tenants;
S3: HQL rewriting: during HQL analysis, the SemanticAnalyzer performs row filtering and field desensitization rewriting on the HQL through Hive's TableMask object;
S4: HQL permission verification: based on the QueryState, SemanticAnalyzer, and HQL, the static doAuthorization method of Driver is invoked to verify the HQL data permissions;
S5: HQL table and field lineage analysis: once the HQL passes authentication, table and field lineage analysis is performed on it.
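The five steps above form a single authentication gate in front of the cluster. The sketch below illustrates that control flow only; all names (HqlAuthGate, AuthResult, the stub bodies) are hypothetical stand-ins, since the real system delegates each step to Hive's parser, TableMask, and Driver.

```java
// Simplified sketch of the S1-S5 pipeline: parse, permission check, rewrite.
// Lineage analysis (S5) would follow on success. All class and method names
// here are illustrative; they are not the patent's actual implementation.
public class HqlAuthGate {
    public static final class AuthResult {
        public final boolean passed;
        public final String rewrittenHql;
        AuthResult(boolean passed, String rewrittenHql) {
            this.passed = passed;
            this.rewrittenHql = rewrittenHql;
        }
    }

    // Step S2 stand-in: the real system parses the HQL with Hive's ParseDriver.
    static void parse(String hql) {
        if (hql == null || hql.trim().isEmpty())
            throw new IllegalArgumentException("empty HQL");
    }

    // Step S3 stand-in: the real rewrite is done by TableMask on the Token
    // stream; here the row-filter predicate is simply appended for a query
    // that has no WHERE clause of its own.
    static String rewrite(String hql, String rowFilter) {
        return rowFilter == null ? hql : hql + " WHERE " + rowFilter;
    }

    // Step S4 stand-in for Driver.doAuthorization: table-level check only.
    static boolean checkPrivileges(java.util.Set<String> grantedTables, String table) {
        return grantedTables.contains(table);
    }

    public static AuthResult authenticate(String hql, String table,
                                          java.util.Set<String> granted, String rowFilter) {
        parse(hql);
        if (!checkPrivileges(granted, table)) return new AuthResult(false, null);
        return new AuthResult(true, rewrite(hql, rowFilter));
    }

    public static void main(String[] args) {
        AuthResult r = authenticate("SELECT name FROM t1", "t1",
                java.util.Set.of("t1"), "dept = 'A'");
        System.out.println(r.passed + " / " + r.rewrittenHql);
    }
}
```

Only when `authenticate` succeeds would the rewritten statement travel on to the JDBC submission module described next.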
Further, when an employee submits a Hive task, the JDBC-based HQL submission module sends the HQL together with the employee's identity information to the Hive data authentication agent unit for authentication; the HQL authentication process comprises HQL parsing, HQL rewriting, HQL permission verification, and HQL table and field lineage analysis. If the HQL passes authentication, the JDBC-based HQL submission module submits the rewritten HQL to the group big data cluster management platform for execution.
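A JDBC-based submission module of this kind can be sketched with the standard `java.sql` API. The HiveServer2 JDBC URL form (`jdbc:hive2://host:port/db;principal=...`) is the conventional one for Kerberized clusters; the host, port, database, and principal below are placeholders, and actually executing `submit` would require the hive-jdbc driver on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch of the JDBC-based HQL submission module: after the proxy layer
// authenticates the HQL, the *rewritten* statement is sent to HiveServer2.
public class HqlSubmitter {
    // Builds a conventional HiveServer2 JDBC URL; the Kerberos principal is
    // appended as a session parameter so that the tenant identity is used
    // for the cluster-side Ranger/Sentry checks.
    public static String buildJdbcUrl(String host, int port, String db, String principal) {
        String url = "jdbc:hive2://" + host + ":" + port + "/" + db;
        return principal == null ? url : url + ";principal=" + principal;
    }

    // Submits the rewritten HQL (requires the hive-jdbc driver at runtime).
    public static void submit(String url, String rewrittenHql) throws Exception {
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            stmt.execute(rewrittenHql);
        }
    }

    public static void main(String[] args) {
        System.out.println(buildJdbcUrl("hs2.example.com", 10000, "dept_a",
                "hive/_HOST@EXAMPLE.COM"));
    }
}
```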
Further, in step S2, after Kerberos authentication passes, a HiveConf is created; creating the HiveConf relies on the Hadoop and Hive configuration files provided by the group big data cluster management platform, including core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and hive-site.xml.
Further, in step S2, HQL parsing comprises the following sub-steps:
S201: create a SessionState object from the HiveConf, and set the userName of the SessionState object to the employee account submitting the HQL;
S202: start the SessionState object, set the current database to the Hive database that the department applied for on the big data cluster management platform, and initialize the transaction manager; once created and started, the SessionState object is live and unique, can communicate with Hadoop to submit distributed tasks, and can also connect to the Hive metastore to query metadata;
S203: create QueryState, Context, and ParseDriver objects in turn; call the parse method of the ParseDriver object to parse the original HQL into abstract syntax tree nodes (ASTNode); use the get method of Hive's SemanticAnalyzerFactory to generate the SemanticAnalyzer corresponding to the QueryState and ASTNode;
S204: call the analyze method of the SemanticAnalyzer to analyze the HQL.
Further, in step S3, the HQL rewriting process comprises the following sub-steps:
S301: traverse the ASTNode of the parsed HQL to obtain table and field information;
S302: through the DataBlackHiveAuthorizer, pull from the data permission management center the row filtering and field desensitization permission policies for the tables and fields corresponding to the userName of the SessionState, and call the applyRowFilterAndColumnMasking method so that the TableMask object can correctly obtain the row filtering and field desensitization expressions;
S303: rewrite the Token stream of the original HQL according to the row filtering and field desensitization expressions, and store the latest Token stream of the HQL in the Context object.
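The effect of the S301-S303 rewrite can be made concrete with a toy example. Hive's TableMask works on the Token stream of the parsed statement; the sketch below skips parsing entirely and just assembles the rewritten shape (filtered, masked subquery in place of the bare table) directly from the policy parts, so all inputs and the `mask(...)` expression are illustrative.

```java
// Illustration of the row-filter / column-mask rewrite of step S3: the table
// reference is replaced by a subquery that applies the row filter as a WHERE
// predicate and wraps the masked column in a masking expression.
public class RowColumnRewriter {
    public static String rewrite(String table, String[] columns,
                                 String rowFilter, String maskedColumn, String maskExpr) {
        StringBuilder inner = new StringBuilder("SELECT ");
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) inner.append(", ");
            if (columns[i].equals(maskedColumn))
                inner.append(maskExpr).append(" AS ").append(maskedColumn);
            else
                inner.append(columns[i]);
        }
        inner.append(" FROM ").append(table);
        if (rowFilter != null) inner.append(" WHERE ").append(rowFilter);
        // Alias the subquery with the original table name so outer references
        // to the table keep resolving, mirroring the shape Hive produces.
        return "(" + inner + ") " + table;
    }

    public static void main(String[] args) {
        System.out.println(rewrite("emp", new String[]{"name", "phone"},
                "dept = 'A'", "phone", "mask(phone)"));
    }
}
```

A query over `emp` is thereby transparently constrained: the employee never sees rows outside `dept = 'A'` and only sees the masked `phone` value.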
Further, in step S4, if the HQL permission verification succeeds, the method returns normally; otherwise a permission verification failure exception is thrown. The permission checking of Driver relies at bottom on the DataBlackHiveAuthorizer class. DataBlackHiveAuthorizer is an implementation of Hive's HiveAuthorizer interface and implements the checkPrivileges, applyRowFilterAndColumnMasking, and needTransform methods. The doAuthorization method of Driver derives the HiveOperationType, the HivePrivilegeObject list, and the authentication context of the HQL, and then calls the checkPrivileges method of DataBlackHiveAuthorizer; checkPrivileges pulls the user permission policies from the data permission management center, analyzes the tables, fields, and operation types involved in the input and output HivePrivilegeObjects, and then matches them against the user permission policies. If all input and output HivePrivilegeObjects pass the permission verification, the method returns normally; otherwise a permission verification failure exception is thrown.
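The matching logic of checkPrivileges can be sketched as follows. `PrivObject` and the `table:column:op` policy encoding are deliberate simplifications of Hive's HivePrivilegeObject and of the management center's policy store; the one property preserved is the source's rule that every input/output object must be covered, otherwise an exception is thrown.

```java
import java.util.List;
import java.util.Set;

// Sketch of the checkPrivileges flow of step S4: every privilege object
// (table, columns, operation) resolved from the HQL must be covered by the
// user's policies, otherwise an authorization exception interrupts execution.
public class PrivilegeChecker {
    public static final class PrivObject {
        final String table;
        final List<String> columns;
        final String op;
        public PrivObject(String table, List<String> columns, String op) {
            this.table = table;
            this.columns = columns;
            this.op = op;
        }
    }

    // userPolicies holds entries encoded as "table:column:op" (a stand-in for
    // the policies pulled from the data permission management center).
    public static void checkPrivileges(Set<String> userPolicies, List<PrivObject> objects) {
        for (PrivObject o : objects)
            for (String col : o.columns)
                if (!userPolicies.contains(o.table + ":" + col + ":" + o.op))
                    throw new SecurityException(
                            "permission denied on " + o.table + "." + col + " for " + o.op);
    }

    public static void main(String[] args) {
        Set<String> policies = Set.of("emp:name:SELECT", "emp:dept:SELECT");
        checkPrivileges(policies, List.of(new PrivObject("emp", List.of("name"), "SELECT")));
        System.out.println("passed");
    }
}
```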
Further, in step S5, the HQL table and field lineage analysis specifically comprises the following steps:
S501: based on the HiveConf, QueryState, SemanticAnalyzer, and HQL, create QueryPlan and HookContext objects;
S502: call the run method of the Java class ColumnLineageAnalysis provided by the invention, which returns the table and field lineage in the HQL; ColumnLineageAnalysis is a subclass of Hive's LineageLogger whose run method is overridden so that it can return the table and field lineage of the HQL;
S503: the Hive data authentication agent sends the HQL permission verification result, the rewritten HQL, and the table and field lineage analysis results to the JDBC-based HQL submission module.
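For intuition, field lineage for a restricted `INSERT INTO t2 (c1, c2) SELECT a, b FROM t1` statement reduces to a positional mapping from target fields to source fields. The toy below shows only that mapping; the real ColumnLineageAnalysis derives lineage by walking Hive's QueryPlan inside a post-execution hook, not from the raw text, and the table/column names here are made up.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy illustration of the field lineage produced by step S5: each target
// field of an INSERT ... SELECT maps positionally to a source field.
public class FieldLineage {
    public static Map<String, String> analyze(String target, String[] targetCols,
                                              String source, String[] sourceCols) {
        Map<String, String> lineage = new LinkedHashMap<>();
        for (int i = 0; i < targetCols.length; i++)
            lineage.put(target + "." + targetCols[i], source + "." + sourceCols[i]);
        return lineage;
    }

    public static void main(String[] args) {
        System.out.println(analyze("t2", new String[]{"id", "name"},
                                   "t1", new String[]{"emp_id", "emp_name"}));
    }
}
```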
In another aspect, the invention also provides a Hive data permission control proxy-layer system for implementing the above Hive data permission control proxy-layer method.
Further, the system comprises a group big data cluster management platform, which opens and registers tenants for each department using the platform and configures each tenant's Hive database and HDFS file directory permissions in advance through Ranger Admin; it also comprises a Hive plug-in embedded in HiveServer2 and an HDFS plug-in, which periodically pull the permission policies from Ranger Admin and store them in a local policy repository.
The system further comprises a data permission management center provided with a table and field permission module, a row filtering and field desensitization module, and a user permission management module. The table and field permissions define data permissions from the metadata dimension; row filtering and field desensitization define data permissions from the data dimension. The Hive data authentication agent unit performs HQL parsing, HQL rewriting, HQL permission verification, and HQL table and field lineage analysis.
The invention has the following beneficial effects: 1) the architectural design of the technical scheme meets the requirement for fine-grained data permission control within an enterprise; 2) HQL parsing, rewriting, and permission verification are implemented on the basis of Hive's own prototypes, so accuracy and stability are guaranteed to a certain extent and the requirements of production deployment are largely met; 3) the Java classes DataBlackHiveAuthorizer and DataBlackHiveAuthorizerFactory provided by the invention are both concrete implementations of Hive interfaces and enable employee/user-level data authentication; 4) installation and deployment do not need to intrude on the group big data cluster management platform, satisfying a large share of real production scenarios.
Drawings
FIG. 1 is a schematic diagram illustrating the architectural design of a Hive data right control agent layer method and system according to an embodiment of the invention;
FIG. 2 is a diagram illustrating a test table structure and data thereof according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a data permission policy table according to an embodiment of the present invention;
FIG. 4 illustrates an employee-to-data-permission-policy mapping diagram in accordance with an embodiment of the present invention;
FIG. 5 illustrates a HQL authentication flow diagram in accordance with an embodiment of the present invention;
FIG. 6 is a diagram illustrating key configuration parameters of HQL security authentication and lineage analysis in an embodiment of the present invention;
FIG. 7 (a) is a comparison chart of HQL before and after rewriting according to the embodiment of the present invention, and (b) is a schematic diagram of query results;
FIG. 8 illustrates a diagram of an HQL authentication process in an embodiment in accordance with the invention;
FIG. 9 shows a table and field lineage analysis process diagram of HQL in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplification of description, but do not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The following describes in detail a specific embodiment of the present invention with reference to fig. 1 to 9. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
The following terms are used herein with these definitions:
Hadoop: narrowly, the open-source distributed computing platform developed by the Apache Foundation; broadly, the big data component ecosystem with Hadoop at its core;
HDFS (Hadoop Distributed File System): the distributed file storage system in the Hadoop ecosystem;
HBase: a column-oriented, non-relational database system in the Hadoop ecosystem;
Hive: an offline query engine in the Hadoop ecosystem; it provides an SQL dialect called HiveQL (HQL) to store, query, and analyze large-scale data stored in Hadoop;
Spark: a memory-based distributed computing framework and offline computing engine in the Hadoop ecosystem;
Flink: a real-time computing engine in the Hadoop ecosystem supporting unified stream and batch processing;
MapReduce: the most basic distributed computing framework and programming model in the Hadoop ecosystem;
Yarn: the general-purpose resource management system in the Hadoop ecosystem;
CDH: a stable Hadoop distribution and big data cluster management platform;
HDP (Hortonworks Data Platform): an open-source big data cluster management platform;
EMR: a stable Hadoop distribution and big data cluster management platform;
Ranger: a policy-based data permission management framework and component in the Hadoop ecosystem, integrated by HDP and EMR;
Sentry: a role-based data permission control framework and component in the Hadoop ecosystem, integrated by CDH;
Ranger Admin: Ranger's permission policy and component management center;
HiveServer2: the service Hive provides to let clients execute HQL;
Kerberos: a computer network authentication protocol supported by the Hadoop ecosystem, used to authenticate identities in an insecure network by secure means;
keytab: the identity credential file issued by the Kerberos server to a tenant;
JDBC (Java Database Connectivity): the application programming interface in the Java language that standardizes how client programs access databases;
HiveConf: a Java class provided in the Hive source code that stores Hadoop and Hive configuration in memory; it inherits from Hadoop's Configuration class;
SessionState: a Java class provided in the Hive source code for creating a Hive session and maintaining state within that session;
QueryState: a Java class provided in the Hive source code for maintaining the state of one HQL query, such as the HiveConf and the operation type of the HQL;
ParseDriver: a Java class provided in the Hive source code whose parse method parses one HQL statement into the corresponding abstract syntax tree node (ASTNode);
ASTNode: a Java class provided in the Hive source code representing the abstract syntax tree of an HQL statement;
SemanticAnalyzerFactory: a Java class provided in the Hive source code whose get method generates the corresponding SemanticAnalyzer for a given ASTNode and QueryState;
SemanticAnalyzer: a Java class provided in the Hive source code whose analyze method performs semantic analysis and optimization on an ASTNode, resolves the metadata (tables, fields, etc.) involved, and performs row filtering and field desensitization rewriting on the HQL;
Context: a Java class provided in the Hive source code that supplies the context environment for the SemanticAnalyzer and maintains the Token stream of the HQL; after the SemanticAnalyzer rewrites the HQL, its Token stream is stored in the Context;
TableMask: a Java class provided in the Hive source code that rewrites the HQL using the row filtering and field desensitization expressions obtained from a HiveAuthorizer; the rewritten HQL is stored in the Context, and a TableMask object is generated when the SemanticAnalyzer object is created;
HiveAuthorizer: a Java interface provided in the Hive source code defining a series of unimplemented methods, among which checkPrivileges is used for user data permission verification, needTransform marks whether the HQL should be rewritten, and applyRowFilterAndColumnMasking obtains the row filtering and field desensitization expressions of a table;
DataBlackHiveAuthorizer: an implementation class of the HiveAuthorizer interface providing concrete implementations of the three methods checkPrivileges, needTransform, and applyRowFilterAndColumnMasking;
HiveAuthorizerFactory: a Java interface provided in the Hive source code defining an unimplemented createHiveAuthorizer method; it is the factory interface for HiveAuthorizer;
DataBlackHiveAuthorizerFactory: an implementation class of the HiveAuthorizerFactory interface whose createHiveAuthorizer method generates an instance of DataBlackHiveAuthorizer;
Driver: a Java class provided in the Hive source code, the Hive driver that parses, compiles, optimizes, and executes HQL; its static doAuthorization method verifies user data permissions;
HiveOperationType: a Java enumeration class provided in the Hive source code representing the operation type of an HQL statement, such as creating a database, querying a table, or inserting into a table;
HivePrivilegeObject: a Java class provided in the Hive source code representing an object to be authorized; it records the database, table, fields, partition, operation type, and other information;
QueryPlan: a Java class provided in the Hive source code recording the input/output format and query plan corresponding to an HQL statement;
HookContext: a Java class provided in the Hive source code supplying the context environment (such as the QueryPlan and QueryState) in which hook classes run before and after HQL execution;
LineageLogger: a Java class provided in the Hive source code; it is a post-execution hook that analyzes the table and field lineage of an HQL statement and prints it as a log;
ColumnLineageAnalysis: a subclass of the LineageLogger class that overrides its run method and can output the table and field lineage of an HQL statement in a specific format.
Fig. 1 is the overall architecture design diagram of the fine-grained Hive data permission control proxy-layer method and system of the present invention. As shown in the lower part of Fig. 1, the security center in the system is responsible for managing and maintaining the group big data cluster management platform 100, opening and registering tenants for the departments using the platform, and configuring each tenant's Hive database and HDFS file directory permissions in advance through Ranger Admin 120. The security center is located in the big data and security department 110 shown in Fig. 1. The Hive plug-in 131 embedded in HiveServer2 130 and the HDFS plug-in 141 of HDFS 140 periodically pull the permission policies from Ranger Admin 120 and store them in a local policy repository. When an upper-layer tenant submits HQL to HiveServer2 130, the four stages of HQL parsing, compiling, optimizing, and executing are performed. In the HQL compilation stage, HiveServer2 130 triggers the Hive plug-in 131 to authorize the input/output privilege objects (HivePrivilegeObject) resolved from the HQL. The Hive plug-in 131 checks the privilege objects one by one against the locally cached permission policies; if some privilege object is judged to be disallowed, HQL execution in HiveServer2 is interrupted immediately, an authorization audit log is sent to Ranger Admin 120, and the tenant authorization failure details are returned. Only after the Hive plug-in 131 grants authorization does HiveServer2 130 submit a MapReduce task to Yarn 150 to execute the HQL. Since Hive's databases and tables are stored in HDFS 140, the HDFS plug-in 141 performs data authorization on tenants that bypass HiveServer2 130 and access HDFS 140 directly. For this reason, when a Hive data permission policy is created, the corresponding HDFS 140 data permission policy is created synchronously.
As shown in the middle part of fig. 1, the department A data analysis platform 200 and the department B data analysis platform 300 interact with the big data components on the group big data cluster management platform 100 after passing Kerberos authentication by means of a tenant a.keytab and a tenant b.keytab, respectively. It should be noted that although the Ranger Admin 120 on the group big data cluster management platform 100 can manage tenant resource permissions and implement resource isolation between tenants, data isolation between tenants is a coarse-grained data permission control scheme and cannot meet the requirement of department A and department B for finer-grained data permissions for their internal employees. To this end, the innovation and contribution of the present invention mainly focus on the technical implementation, shown in the upper half of fig. 1, of the data rights management center 400 and the Hive data authentication agent unit 500.
According to the improved technical principle of the invention, in the data authority management center 400, the types of Hive data authorities are divided into two categories: table and field authority, and row filtering and field desensitization; the data authority management center 400 is correspondingly provided with a table and field authority module 410 and a row filtering and field desensitization module 420. The table and field authority defines data authority from the metadata dimension, such as table deletion, modification and truncation, and field query authority; row filtering and field desensitization define data authority from the data dimension. The user authority management module 430 is responsible for maintaining the mapping relationship between department employees and data authority policies. Specifically, in department A 200, employee A.b applies to department A 200 for Hive data authority through the data authority approval service 210; after the data authority approval is passed, the data authority management center 400 synchronously creates the corresponding data authority policies, stores them in the table and field authority module 410 and the row filtering and field desensitization module 420, and updates the mapping relationship between the employee and the data authority policies in the user authority management 430.
The Hive data authentication agent unit 500 is the core of the data authority agent layer scheme of the present invention, and is responsible for HQL rewriting 510, HQL authority verification 520, and providing the HQL table and field lineage analysis service 530. As shown in the upper part of fig. 1, after applying for the authority of a Hive table, the employee A.b 221 of department A 200 submits a Hive task, and the JDBC-based HQL submission module 230 sends the HQL and the identity information of employee A.b to the Hive data authentication agent unit 500 for authentication. First, the Hive data authentication agent unit 500 pulls the user authority policies from the data authority management center 400. Then, if a row filtering and field desensitization authority policy exists for employee A.b, the Hive data authentication agent unit 500 rewrites the HQL while the HQL is parsed, for example by adding a row filtering and field desensitization expression, thereby implementing row filtering and field desensitization. Similar to the Hive plug-in of Ranger, after parsing out the input/output permission objects of the HQL, the Hive data authentication agent unit 500 checks the permission objects one by one against the locally cached permission policies; if a certain permission object is identified as not allowed, it immediately notifies the JDBC-based HQL submission module 230 to abort the HQL task submission, returns the user authentication failure details, and sends an authentication audit log to the data rights management center 400. Finally, after the authentication passes, the Hive data authentication agent unit 500 performs table and field lineage analysis on the HQL, and sends the authentication result, the rewritten HQL, and the table and field lineage information to the JDBC-based HQL submission module 230. The JDBC-based HQL submission module 230 submits the rewritten HQL to the HiveServer2 130 on the big data management platform 100 for execution.
As shown in fig. 1, the data authority management center 400 and the Hive data authentication agent unit 500 are both outside the group big data cluster management platform 100, and neither their installation nor their deployment needs to intrude into the group big data cluster management platform 100. In addition, Hive data authentication is performed before the HQL is submitted to the big data cluster management platform 100, so the execution process of the HiveServer2 130 is not interfered with, and the occurrence of security problems is avoided. Most importantly, the granularity of Hive data authentication can be refined to individual employees within a department, meeting the requirement of finer-grained data authority control inside the department. Therefore, the data authority management center 400 and the Hive data authentication agent unit 500 constitute the data authority control agent layer of the present invention.
The technical scheme of the fine-grained Hive data authority control agent layer method and system is as follows; the method comprises the following steps:
step S1: hive data authority application.
As shown in fig. 1, when the data authority approval service of the department approves the Hive data authority application of the employee, the data authority management center 400 synchronously creates a corresponding data authority policy, stores the policy in the table and field authority module 410 and the row filtering and field desensitization module 420, and updates the mapping relationship between the employee and the data authority policy in the user authority management 430.
In one embodiment, department A 200 applies for the Hive database "hive_test" on the big data cluster management platform 100, which has a Hive table named "ods_tbl_test" with the table structure and data shown in FIG. 2. Employee A.b applies to the data permission approval service 210 for the view permission of the table "ods_tbl_test" and sets the row filtering and field desensitization conditions. After approval, the data rights management center 400 stores the data rights policies shown in FIG. 3, where there are three rights policies: 1) query all fields of the table "ods_tbl_test" in the library "hive_test"; 2) row filtering of the table "ods_tbl_test" in the library "hive_test"; and 3) field desensitization of the table "ods_tbl_test" in the library "hive_test". Accordingly, the mapping relationship between the employee and the data authority policies is shown in fig. 4. Employee A.a 222 has not yet applied for any authority, while employee A.b has the authority policy ID set {1,2,3}, corresponding to the three authority policies in fig. 3. When the employee submits a Hive task, the JDBC-based HQL submission module 230 sends the HQL and the identity information of the employee to the Hive data authentication agent unit 500 for authentication. The HQL authentication process includes the following four steps: HQL parsing, rewriting, authority check, and table and field lineage analysis; the corresponding technical implementation flow is shown in fig. 5. HiveConf, SessionState, QueryState, ParseDriver, ASTNode, SemanticAnalyzer, Driver, QueryPlan and HookContext in the technical implementation flow chart are all Java classes provided by Hive. The following steps S2-S5 describe the HQL authentication procedure in fig. 5 in detail.
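The policy bookkeeping of this embodiment can be sketched in a few lines (an illustrative Python reduction; the concrete row-filter and masking expressions are assumptions, since FIG. 3 itself is not reproduced here, and the real policies live in the data authority management center 400):

```python
# Minimal sketch of the data-rights store of the data authority management
# center: three policies for library "hive_test", table "ods_tbl_test",
# plus the employee -> policy-ID mapping kept by user authority management.
# The concrete filter/mask expressions below are illustrative assumptions.
POLICIES = {
    1: {"type": "table_and_field", "db": "hive_test", "table": "ods_tbl_test",
        "fields": "*", "ops": {"select"}},                    # full-field query
    2: {"type": "row_filter", "db": "hive_test", "table": "ods_tbl_test",
        "filter": "id > 3"},                                  # hypothetical row filter
    3: {"type": "field_mask", "db": "hive_test", "table": "ods_tbl_test",
        "field": "note", "mask": "substr(note, 3)"},          # crop from 3rd char
}

USER_POLICY_IDS = {
    "A.a": set(),        # employee A.a has applied for no authority yet
    "A.b": {1, 2, 3},    # employee A.b holds all three policies of FIG. 3
}

def policies_for(user):
    """Return the data authority policies mapped to a given employee."""
    return [POLICIES[pid] for pid in sorted(USER_POLICY_IDS.get(user, set()))]
```

When the Hive data authentication agent unit receives an HQL together with the submitting employee's identity, a lookup of this kind is the first step before rewriting and checking.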
Step S2: HQL parsing.
Before HQL parsing, Kerberos authentication is first performed using the tenant keytab (note: a keytab file is an identity authentication ticket issued to the tenant by the Kerberos server) with which the department interacts with the group big data cluster management platform 100, thereby implementing data isolation between tenants. After the authentication passes, a HiveConf is created. The creation of a HiveConf relies on the Hadoop and Hive configuration files provided by the big data cluster management platform 100. In addition, some key parameters must be set during HiveConf creation to implement row filtering, field desensitization, and data authentication of HQL. These key parameters are detailed in fig. 6, where the HQL security authentication manager is a Java class provided by the present invention, whose main functions are authority policy pulling and HQL data authentication; it is also the technical core of the present invention.
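Although the key parameters of fig. 6 are not reproduced in the text, a HiveConf prepared for custom authorization would plausibly carry settings of the following kind (the authorization-related property names are genuine HiveConf keys; the factory class name and host are hypothetical placeholders):

```properties
# Illustrative HiveConf settings; FIG. 6 holds the authoritative list.
hive.security.authorization.enabled=true
# plug the invention's authorizer into Hive (class name hypothetical)
hive.security.authorization.manager=com.example.DataBlackHiveAuthorizerFactory
# metastore endpoint of the group big data cluster (host is a placeholder)
hive.metastore.uris=thrift://metastore-host:9083
```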
HQL parsing includes the following sub-steps:
S201, a SessionState object is created using HiveConf, and the userName of the SessionState object is set to the account of the employee who submitted the HQL.
S202, the SessionState object is started, the current database is set to the Hive database applied for by the department on the big data cluster management platform, for example "hive_test" in the example of step S1, and the transaction manager is initialized. After the SessionState object is created and started, it is effective and unique in the global scope, can communicate with Hadoop to submit distributed tasks, and can connect to the Hive metastore to query metadata information.
S203, QueryState, Context and ParseDriver objects are created in turn. The parse method of the ParseDriver object is called to parse the original HQL into an abstract syntax tree node (ASTNode). The get method of Hive's SemanticAnalyzerFactory is used to generate the SemanticAnalyzer corresponding to the QueryState and ASTNode.
S204, the analyze method of the SemanticAnalyzer is called to analyze the HQL.
Step S3: HQL rewriting.
During the process of parsing the HQL, the SemanticAnalyzer performs row filtering and field desensitization rewriting on the HQL through the Hive TableMask object. The detailed HQL rewriting process includes the following sub-steps:
S301, the ASTNode of the HQL is traversed and analyzed to obtain table and field information;
S302, the row filtering and field desensitization authority policies of the tables and fields corresponding to the userName of the SessionState are pulled from the data authority management center through the DataBlackHiveAuthorizer, and the applyRowFilterAndColumnMasking method is called so that the TableMask object can correctly obtain the row filtering and field desensitization expressions;
S303, the Token stream of the original HQL is rewritten according to the row filtering and field desensitization expressions, and the latest Token stream of the HQL is stored in the Context object. For the example in step S1, assume that the HQL submitted by employee A.b is "select id, private_part, note from ods_tbl_test"; after pulling the authority policies in fig. 3, the HQL before and after the TableMask rewriting is shown in (a) of fig. 7, and the result of the rewritten data query is shown in (b) of fig. 7. By comparing the table data of fig. 2 and (b) of fig. 7, it can be seen that the data has been row filtered and field desensitized according to the authority policies. Specifically, the table ods_tbl_test originally has 5 rows of records in FIG. 2. The table and field data authority policy in row 1 of FIG. 3 gives employee A.b full-field query authority on the table ods_tbl_test. Meanwhile, due to the row filtering policy in row 2 of fig. 3, the rewritten HQL in (a) of fig. 7 adds a filtering condition beginning with "where" compared with the original HQL; as a result, the query result in (b) of fig. 7 only contains the last 2 records of fig. 2, the first 3 records being filtered out at query time because they do not meet the filtering condition. In addition, the field desensitization policy substr(note, 3) (note: crop the data of the note field starting from the 3rd character) in row 3 of FIG. 3 is applied to the note field of the rewritten HQL in (a) of FIG. 7, so that the notes of the two records in (b) of FIG. 7 are the notes of FIG. 2 cropped from the third character.
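The observable effect of this rewriting can be mimicked with a small sketch (illustrative Python; the real rewriting operates on Hive's Token stream via TableMask, and the row-filter condition `id > 3` is a hypothetical stand-in for the condition of FIG. 3):

```python
# Toy illustration of the S303 rewriting: replace each masked column with
# its masking expression in the select list, then append the row-filter
# predicate. Hive's TableMask instead rewrites the HQL Token stream; this
# only mimics the observable effect of (a) in FIG. 7.
def rewrite_hql(table, columns, masks, row_filter=None):
    select_list = ", ".join(
        "{} as {}".format(masks[c], c) if c in masks else c for c in columns
    )
    hql = "select {} from {}".format(select_list, table)
    if row_filter:
        hql += " where {}".format(row_filter)
    return hql

rewritten = rewrite_hql(
    "ods_tbl_test",
    ["id", "private_part", "note"],
    masks={"note": "substr(note, 3)"},  # field desensitization policy
    row_filter="id > 3",                # hypothetical row-filter condition
)
# rewritten: "select id, private_part, substr(note, 3) as note
#             from ods_tbl_test where id > 3"
```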
Step S4: HQL authority check.
Based on the QueryState, SemanticAnalyzer and HQL, the static doAuthorization method of Driver is called to implement HQL permission verification. If the HQL permission verification succeeds, the method returns normally; otherwise, a permission verification failure exception is thrown. The underlying permission check of Driver depends on the DataBlackHiveAuthorizer class provided by the present invention. The DataBlackHiveAuthorizer class is an implementation class of the HiveAuthorizer interface of Hive and implements the three methods checkPrivileges, applyRowFilterAndColumnMasking and needTransform. Fig. 8 shows the details of the underlying flow of the Driver permission check. The doAuthorization method of Driver parses the HiveOperationType, the input and output HivePrivilegeObjects and the authentication context of the HQL, and then calls the checkPrivileges method of DataBlackHiveAuthorizer. The checkPrivileges method pulls the user permission policies from the data permission management center, parses the tables, fields and operation types referred to in the input and output HivePrivilegeObjects, and then matches them against the user permission policies. If all input and output HivePrivilegeObject objects pass the permission verification, the method returns normally; otherwise, a permission verification failure exception is thrown. Finally, the checkPrivileges method also sends a detailed authentication audit log to the data permission management center for audit analysis.
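The matching loop of checkPrivileges can be reduced to the following sketch (illustrative Python; the dict-based policy and privilege-object schema is an assumption standing in for Hive's HivePrivilegeObject model):

```python
# Illustrative reduction of DataBlackHiveAuthorizer.checkPrivileges: every
# (db, table, field, op) privilege object parsed from the HQL must be
# covered by a table-and-field policy of the user; otherwise an exception
# is thrown, and an audit record is emitted either way.
class AuthorizationError(Exception):
    pass

def check_privileges(priv_objects, user_policies, audit_log):
    for obj in priv_objects:
        allowed = any(
            p["db"] == obj["db"] and p["table"] == obj["table"]
            and (p["fields"] == "*" or obj["field"] in p["fields"])
            and obj["op"] in p["ops"]
            for p in user_policies
        )
        audit_log.append((obj["field"], obj["op"], allowed))  # audit trail
        if not allowed:
            raise AuthorizationError("permission denied: {}".format(obj))

policies = [{"db": "hive_test", "table": "ods_tbl_test",
             "fields": "*", "ops": {"select"}}]
audit = []
check_privileges(
    [{"db": "hive_test", "table": "ods_tbl_test",
      "field": "id", "op": "select"}],
    policies, audit)  # authorized: returns normally
```

A disallowed object (for example an unauthorized `drop` operation) would raise on its first mismatch, which corresponds to aborting the HQL task submission.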
Step S5: HQL table and field lineage analysis.
When the HQL permission verification passes, HQL table and field lineage analysis can be performed. It specifically includes the following steps:
S501, based on the HiveConf, QueryState, SemanticAnalyzer and HQL, QueryPlan and HookContext objects are created.
S502, the run method of the Java class ColumnLineageAnalysis provided by the present invention is called, returning the table and field lineage in the HQL. ColumnLineageAnalysis is a subclass of Hive's LineageLogger; the run method is overridden in the subclass so that it can return the table and field lineage of the HQL. For example, the table and field lineage of the HQL in (a) of fig. 7 is shown in fig. 9: in the HQL query result of fig. 7, the target fields id, private_part and note come respectively from the fields id, private_part and note of the table ods_tbl_test in the Hive database hive_test, where id and private_part are direct mappings from source field to target field, and note is transformed by substr(note, 3).
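The lineage of FIG. 9 can be represented as a simple mapping (an illustrative Python structure; the actual output format of the overridden run method is not specified in the text, so this shape is an assumption):

```python
# Illustrative field-lineage record for the rewritten HQL of FIG. 7(a):
# each target field maps to its source (database, table, field) and the
# transforming expression, if any.
lineage = {
    "id":           {"source": ("hive_test", "ods_tbl_test", "id"),
                     "expression": None},               # direct mapping
    "private_part": {"source": ("hive_test", "ods_tbl_test", "private_part"),
                     "expression": None},               # direct mapping
    "note":         {"source": ("hive_test", "ods_tbl_test", "note"),
                     "expression": "substr(note, 3)"},  # transformed field
}

def source_tables(lin):
    """Collect the distinct (database, table) pairs the query reads from."""
    return {rec["source"][:2] for rec in lin.values()}
```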
S503, the Hive data authentication agent sends the HQL authentication result, the rewritten HQL, and the table and field lineage analysis result to the JDBC-based HQL submission module. If the HQL authentication passes, the JDBC-based HQL submission module submits the rewritten HQL to the group big data cluster management platform 100 for execution. Because the Hive data authentication agent unit 500 performs data authentication per employee/user, it can meet the requirement of finer-grained data authority control inside an enterprise.
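Putting steps S2-S5 together, the agent-side flow can be sketched end to end (illustrative Python; every callable passed in is a hypothetical stand-in for the corresponding Hive Java machinery):

```python
# End-to-end sketch of the Hive data authentication agent unit 500:
# pull policies -> rewrite HQL -> permission check -> lineage analysis,
# returning the bundle sent back to the JDBC-based HQL submission module.
def authenticate_hql(user, hql, pull_policies, rewrite, check, analyze_lineage):
    policies = pull_policies(user)       # from data authority mgmt center 400
    rewritten = rewrite(hql, policies)   # row filtering + field desensitization
    try:
        check(rewritten, policies)       # raises PermissionError on denial
    except PermissionError as exc:
        return {"passed": False, "detail": str(exc)}  # abort submission
    return {"passed": True, "hql": rewritten,
            "lineage": analyze_lineage(rewritten)}

result = authenticate_hql(
    "A.b", "select id from ods_tbl_test",
    pull_policies=lambda u: ["policy-1"],         # hypothetical policy
    rewrite=lambda q, p: q + " where id > 3",     # hypothetical rewrite
    check=lambda q, p: None,                      # authorized
    analyze_lineage=lambda q: {"id": ("hive_test", "ods_tbl_test", "id")},
)
```

Only when `result["passed"]` is true does the JDBC-based submission module forward the rewritten HQL to the cluster.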
The key technical points of the invention include: 1) the design of the technical scheme of the invention, such as the interaction logic and functional responsibilities among the data authority management center, the Hive data authentication agent and the JDBC-based HQL submission; 2) the technical implementation scheme, provided by the invention, of HQL parsing, rewriting and authentication and of table and field lineage analysis based on Hive prototypes; 3) the method logic of checkPrivileges of the security-authentication-related Java class DataBlackHiveAuthorizer realized by the invention.
The HQL parsing and rewriting in step S3 may also traverse the ASTNode through the ANTLR4 visitor mode and analyze the table and field resources involved in the HQL and the corresponding access methods. However, this approach has limited accuracy and cannot cover complex HQL scenarios, and thus generally cannot meet online production standards.
In another aspect, the present invention further provides a fine-grained Hive data right control agent layer system, where the system is configured to implement the fine-grained Hive data right control agent layer method according to the present invention.
In the description herein, references to the description of the terms "embodiment," "example," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Moreover, various embodiments or examples and features thereof described in this specification may be combined or combined without creating inconsistencies by those skilled in the art.
Although embodiments of the present invention have been shown and described, it is understood that the above embodiments are illustrative and not to be construed as limiting the present invention, and that modifications, alterations, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art without departing from the scope of the present invention.

Claims (5)

1. A Hive data right control agent layer method, which is characterized in that the method comprises the following steps:
s1: hive data authority application; when the data authority approval service of a department approves the Hive data authority application of the employee, the data authority management center synchronously creates a corresponding data authority strategy, stores the corresponding data authority strategy in the table and field authority module and the row filtering and field desensitization module, and updates the mapping relation between the employee and the data authority strategy in the user authority management;
s2: HQL parsing; before HQL parsing, Kerberos authentication needs to be performed using the keytab with which the department's tenant interacts with the group big data cluster management platform, thereby realizing data isolation among tenants;
s3: HQL rewriting; in the process of parsing the HQL, the SemanticAnalyzer performs row filtering and field desensitization rewriting on the HQL through a Hive TableMask object;
s4: HQL authority check; based on the QueryState, SemanticAnalyzer and HQL, a static doAuthorization method of Driver is called to realize HQL data authority verification;
s5: HQL table and field lineage analysis; when the HQL authority verification passes, HQL table and field lineage analysis is carried out;
after applying for the authority of a Hive table, the employee submits a Hive task, a JDBC-based HQL submission module sends the HQL and the identity information of the employee to a Hive data authentication agent unit for authentication, and the HQL authentication process comprises HQL parsing, HQL rewriting, HQL authority verification, and HQL table and field lineage analysis;
in the step S2, after the Kerberos authentication passes, a HiveConf is created; the creation of the HiveConf depends on Hadoop and Hive configuration files provided by the group big data cluster management platform, wherein the configuration files comprise core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml and hive-site.xml;
in step S2, HQL parsing includes the following sub-steps:
s201, creating a SessionState object by using the HiveConf, and setting the userName of the SessionState object to the account of the employee who submitted the HQL;
s202, starting the SessionState object, setting the current database to the Hive database applied for by the department on the big data cluster management platform, and initializing the transaction manager; after the SessionState object is created and started, it is effective and unique, can communicate with Hadoop to submit distributed tasks, and can also connect to the Hive metastore to query metadata information;
s203, sequentially creating QueryState, Context and ParseDriver objects; calling the parse method of the ParseDriver object to parse the original HQL into abstract syntax tree nodes; generating the SemanticAnalyzer corresponding to the QueryState and ASTNode by using the get method of Hive's SemanticAnalyzerFactory;
s204, calling an analyze method of the SemanticAnalyzer to analyze the HQL;
in step S3, the HQL rewriting process includes the following substeps:
s301, traversing and analyzing ASTNode of HQL to obtain table and field information;
s302, pulling, from the data authority management center through the DataBlackHiveAuthorizer, the row filtering and field desensitization authority policies of the tables and fields corresponding to the userName of the SessionState, and calling the applyRowFilterAndColumnMasking method so that the TableMask object can correctly acquire the row filtering and field desensitization expressions;
s303, rewriting the Token stream of the original HQL according to the row filtering and field desensitization expressions, and storing the latest Token stream of the HQL in the Context object;
in the step S4, if the HQL authority verification succeeds, the method returns normally; otherwise, a permission verification failure exception is thrown; wherein the underlying authority verification of the Driver depends on the DataBlackHiveAuthorizer class; the DataBlackHiveAuthorizer class is an implementation class of the HiveAuthorizer interface of Hive and implements the checkPrivileges, applyRowFilterAndColumnMasking and needTransform methods; the doAuthorization method of the Driver parses the HiveOperationType, the input and output HivePrivilegeObjects and the authentication context of the HQL, and then calls the checkPrivileges method of the DataBlackHiveAuthorizer; the checkPrivileges method pulls the user authority policies from the data authority management center, parses the tables, fields and operation types involved in the input and output HivePrivilegeObjects, and then matches them against the user authority policies; if all input and output HivePrivilegeObject objects pass the permission verification, the method returns normally; otherwise, a permission verification failure exception is thrown;
in step S5, the HQL table and field lineage analysis specifically comprises the following steps:
s501, based on the HiveConf, QueryState, SemanticAnalyzer and HQL, creating QueryPlan and HookContext objects;
s502, calling the run method of the Java class ColumnLineageAnalysis, and returning the table and field lineage in the HQL; ColumnLineageAnalysis is a subclass of Hive's LineageLogger, in which the run method is overridden so that it can return the table and field lineage of the HQL;
and s503, the Hive data authentication agent sends the HQL authority verification result, the rewritten HQL, and the table and field lineage analysis result to the JDBC-based HQL submission module.
2. The Hive data authority control agent layer method of claim 1, wherein if HQL authentication passes, the JDBC-based HQL submission module submits the rewritten HQL to a group big data cluster management platform for execution.
3. A Hive data authority control proxy layer system, characterized in that the system is used for realizing the Hive data authority control proxy layer method according to any one of claims 1-2.
4. The Hive data authority control agent layer system according to claim 3, wherein the system comprises a group big data cluster management platform, which is used for opening and registering tenants for each department using the platform and configuring in advance, through the Ranger Admin, the Hive database and HDFS file directory authority of each tenant; the system further comprises a Hive plug-in embedded in the HiveServer2 and an HDFS plug-in, which are used for periodically pulling the authority policy from the Ranger Admin and storing it in a local policy repository.
5. The Hive data authority control agent layer system according to claim 3 or 4, further comprising a data authority management center, wherein the data authority management center is provided with a table and field authority module, a row filtering and field desensitization module, and a user authority management module; wherein the table and field authority defines data authority from the metadata dimension, and row filtering and field desensitization define data authority from the data dimension;
and a Hive data authentication agent unit, which performs HQL parsing, HQL rewriting, HQL authority verification, and HQL table and field lineage analysis services.
CN202210818903.5A 2022-07-13 2022-07-13 Hive data authority control agent layer method and system Active CN114896584B (en)

Publications (2)

Publication Number Publication Date
CN114896584A CN114896584A (en) 2022-08-12
CN114896584B (en) 2022-10-11





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant